
CircleID: Featured Blogs



Latest blog postings on CircleID



Updated: 2017-11-16T21:27:00-08:00

 



UDRPs Filed - Brand Owners Take Note

2017-11-16T13:27:00-08:00

After being in the domain industry for over 15 years, there aren't too many things that catch me by surprise, but recently a few UDRP filings have me scratching my head. Both ivi.com and ktg.com have had UDRPs filed against them, and for anyone holding a valuable domain name, these cases are a cautionary tale; the outcome of each deserves close attention.

Just as a refresher, to be successful in a UDRP filing, the complainant must prove all of the following: the domain name is identical or confusingly similar to a trademark or service mark in which the complainant has rights; the registrant has no rights or legitimate interests in respect of the domain name; and the domain name has been registered and is being used in bad faith. With that in mind, let's look a little closer at the details of these two troubling UDRP filings.

Ivi.com is registered to WebMD, LLC, a long-time provider of health and wellness information on the Internet, and the domain has been registered since 1992. The domain name currently doesn't resolve to any content, so it's not actively being used. The complainant is Equipo IVI SL, an assisted reproduction group based in Spain that appears to operate from the domain ivi-fertility.com. According to its website, IVI appears to have been founded in 1990 in Valencia.

The domain ktg.com is registered to HUKU LLC, which appears to be an entity based in Belize, and has been registered since at least 2001. According to a reverse WHOIS lookup, this entity owns a few hundred generic domain names in a variety of extensions. The domain ktg.com resolves to a Domain Holdings page with a message stating that the domain may be for sale. The complainant is a company called Kitchens To Go, which operates from the kitchenstogo.com domain, registered in 1998, and which appears to operate the k-t-g.com domain name as well.

On the prima facie evidence, I'm doubtful that either of these UDRP filings should succeed — but then again the domain imi.com was recently handed over to the complainant in a case with circumstances very similar to these latest two. It should be noted, though, that in that case the registrant did not even respond to the UDRP.

What can brand owners do to ensure that they don't find themselves losing a domain in a questionable UDRP filing? A few things:

Ensure your WHOIS information is up-to-date and accurate so that any correspondence sent to the listed contacts is received. People assume nothing of value comes to those published contacts, but a UDRP filing is certainly something you'd want to make sure you received.

If you do find a long-held domain subject to a UDRP (or any UDRP for that matter), make sure you file a response so that you don't leave the complainant as the only voice in front of the UDRP panelists.

Make sure that your registrar has a procedure in place to notify you of any UDRP filing it receives for your domains. In addition to the communication sent to the domain owner, the registrar of record also receives notification, and it should be passing those notifications on to its clients.

It will be very interesting to see how these two UDRP filings play out, and we'll be sure to report back once the decisions have been made public.

Written by Matt Serlin, SVP, Client Services and Operations at Brandsight
Follow CircleID on Twitter
More under: Domain Names, Intellectual Property, UDRP [...]



When UDRP Consolidation Requests Go Too Far

2017-11-16T07:25:00-08:00

Although including multiple domain names in a single UDRP complaint can be a very efficient way for a trademark owner to combat cybersquatting, doing so is not always appropriate. One particularly egregious example involves a case that originally included 77 domain names — none of which the UDRP panel ordered transferred to the trademark owner, simply because consolidation against the multiple registrants of the domain names was improper. The UDRP case, filed by O2 Worldwide Limited, is an important reminder to trademark owners that they should not overreach when filing large complaints — at least when the disputed domain names are held by different registrants.

The Same Domain-Name Holder

Under the UDRP rules, a "complaint may relate to more than one domain name, provided that the domain names are registered by the same domain-name holder." As a result, many UDRP complaints include multiple domain names — from two to more than 1,500. While this UDRP rule may seem straightforward, it can become more complicated in practice, especially as some clever cybersquatters try to hide behind aliases to frustrate trademark owners. Where the registrants appear to be different, the WIPO Overview of WIPO Panel Views on Selected UDRP Questions, Third Edition, says that UDRP panels often weigh the following in deciding whether it is proper to include multiple domain names in a single complaint: "whether (i) the domain names or corresponding websites are subject to common control, and (ii) the consolidation would be fair and equitable to all parties." The Overview adds: "Procedural efficiency would also underpin panel consideration of such a consolidation scenario."

Not Procedurally Efficient

In the O2 case, the panel found that consolidation was not appropriate, based on a most unusual set of facts. O2 had argued that "unifying features… link all of the domains" and that a single individual "maintain[ed] common control" over all of the domain names. But the panel strongly disagreed, noting that 25 different entities were named as respondents for the 77 domain names in the original complaint. Incredibly, the panel said:

The administrative procedure that the [WIPO] Center was required to undertake as a result of this filing involved: (i) numerous communications with four different Registrars; (ii) the withdrawal of the Complaint against 11 of the domain names due to the fact that they were no longer registered; (iii) the receipt of 20 separate communications, from 12 different Respondents or Other Submissions, respectively, each of whom appeared to be operating independently of the others and whose positions were not identical; (iv) the receipt of two separate formal Responses; and (v) the filing of one unsolicited Supplemental Filing by the Complainant.

This, the panel wrote, created an "administrative burden" that was "undue — and certainly not procedurally efficient." Further, the panel said that because "the Respondents appear to be separate persons whose positions are not necessarily identical," treating them alike in a single proceeding "is unlikely to be fair and equitable."

Not only did the panel reject O2's consolidation arguments, but it also rejected O2's request to proceed against any of the disputed domain names:

In the Panel's view, what the Complainant has sought to do is throw a large number of disputed domain names registered by a large number of separate Respondents into one Complaint, request consolidation on the basis of a general assertion of connectedness, rely on the Center to verify the situation of every disputed domain name and Respondent to identify those against whom the Complaint can proceed, and rely on the Panel to work through the case of every Respondent to determine in respect of whom consolidation would be fair and equitable. The Panel does not wish to encourage Complainants to adopt this approach. Accordingly, the Panel will not accede to the Complainant's request to allow consolidation to proceed in res[...]



Your Online Freedoms are Under Threat - 2017 Freedom on the Net Report

2017-11-14T08:08:00-08:00

As more people get online every day, Internet freedom is facing a global decline for the seventh year in a row. Today, Freedom House released its 2017 Freedom on the Net report, one of the most comprehensive assessments of countries' performance regarding online freedoms. The Internet Society is one of the supporters of this report. We think it brings solid, needed, evidence-based data to an area that fundamentally impacts user trust.

Looking across 65 countries, the report highlights several worrying trends, including:

manipulation of social media in democratic processes
restrictions on virtual private networks (VPNs)
censoring of mobile connectivity
attacks against netizens and online journalists

Elections prove to be particular tension points for online freedoms (see also Freedom House's new Internet Freedom Election Monitor). Beyond the reported trend towards more sophisticated government attempts to control online discussions, the other side of the coin is an increase in restrictions on Internet access, whether through shutting down networks entirely or blocking specific communication platforms and services. These Internet shutdowns risk becoming the new normal. In addition to their impact on freedom of expression and peaceful assembly, shutdowns generate severe economic costs, affecting entire economies [1] and the livelihood of tech entrepreneurs, often in regions that would benefit the most from digital growth.

We need to build on these numbers, as they open a new door to ask governments for accountability. By adopting the U.N. Sustainable Development Goals (SDGs) in 2015, governments of the world committed to leveraging the power of the Internet in areas such as education, health and economic growth. Cutting off entire populations from the Internet sets the path in the wrong direction. Mindful that there is urgency to address this issue, the Internet Society is today releasing a new policy brief on Internet shutdowns, which provides an entry into this issue, outlines the various impacts of such measures and offers some preliminary recommendations to governments and other stakeholders.

Of course, this can only be the beginning of any action, and we need everyone to get informed and make their voices heard on shutdowns and other issues related to online freedoms. Here is what you can do:

Follow the live video stream of the launch event for Freedom House's 2017 Freedom on the Net report. The Internet Society's Vice President of Global Policy Development, Sally Wentworth, is among the panelists. (14 November 2017, 9:30 am EDT)

Read the new Freedom on the Net report and pay particular attention to country reports relevant to you.

Ask people to spread the word that Internet shutdowns cost everyone. Governments should stop using Internet shutdowns and other means of denying access as a policy tool: we must keep the Internet on. Tweet using #ShapeTomorrow and #NetFreedom2017. You'll find more tweets on the Internet Society's Twitter account.

Read the Internet Society's new policy brief on Internet shutdowns, and look back at our paper on Internet Content Blocking for a deeper technical assessment of some common content filtering techniques.

Read again ISOC's findings on personal rights and freedoms from our 2017 Global Internet Report.

Join the Keep It On movement to collectively call for an end to shutdowns.

[1] Among other similar studies, Brookings assessed a cost of about USD 2.4 billion resulting from shutdowns across countries evaluated between July 1, 2015 and June 30, 2016.

Written by Nicolas Seidler, Senior Policy Advisor
Follow CircleID on Twitter
More under: Censorship, Internet Governance, Policy & Regulation [...]



Telesat - A Fourth Satellite Internet Competitor

2017-11-13T12:58:00-08:00

Telesat will begin with only 117 satellites while SpaceX and the others plan to launch thousands — how can they hope to compete? The answer lies in their patent-pending deployment plan.

I've been following the SpaceX, OneWeb and Boeing satellite Internet projects, but have not mentioned Telesat's project. Telesat is a Canadian company that has provided satellite communication service since 1972. (They claim their "predecessors" worked on Telstar, which relayed the first intercontinental transmission, in 1962). Earlier this month, the FCC approved Telesat's petition to provide Internet service in the US using a proposed constellation of 117 low-Earth orbit (LEO) satellites.

They plan a polar-orbit constellation of six equally-spaced (30 degrees apart) planes inclined at 99.5 degrees at an altitude of approximately 1,000 kilometers, and an inclined-orbit constellation of five equally-spaced (36 degrees apart) planes inclined at 37.4 degrees at an approximate altitude of 1,248 kilometers.

Telesat's LEO constellation will combine polar (green) and inclined (red) orbits.

This hybrid polar-inclined constellation will provide global coverage with a minimum elevation angle of approximately 20 degrees, using their ground stations in Svalbard, Norway and Inuvik, Canada. Their analysis shows that 168 polar-orbit satellites would be required to match the global coverage of their 117-satellite hybrid constellation, and according to Erwin Hudson, Vice President of Telesat LEO, their investment per Gbps of sellable capacity will be as low as, or lower than, that of any existing or announced satellite system. They also say their hybrid architecture will simplify spectrum-sharing.

An inter-constellation route (source)

The figure from their patent application illustrates hybrid routing. The first hop in a route to the Internet for a user in a densely populated area like Mexico City (410) would be to a visible inclined-orbit satellite (420). The next hop would be to a satellite in the polar-orbit constellation (430), then to a ground station on the Internet (440). The up and downlinks will use radio frequencies, and the inter-satellite links will use optical transmission. Since the ground stations are in sparsely populated areas and the distances between satellites are low near the poles, capacity will be balanced. This scheme may result in Telesat customers experiencing slightly higher latencies than those of their competitors, but the difference will be negligible for nearly all applications.

They will launch two satellites this year — one on a Russian Soyuz rocket and the other on an Indian Polar Satellite Launch Vehicle. These will be used in tests, and Telesat says a number of their existing geostationary satellite customers are enthusiastic about participating in the tests. They will launch their phase 2 satellites beginning in 2020 and commence commercial service in 2021. They consider 25 satellites per launch vehicle a practical number, and they expect to have global availability before their competitors. Their initial capacity will be relatively low, but they will add satellites as demand grows.

Like OneWeb, Telesat will work with strategic partners for launches and for the design and production of satellites and antennae. They have not yet selected those partners, but are evaluating candidates and are confident they will be ready in time for their launch dates. Their existing ground stations give them a head start. (OneWeb just contracted with Hughes for ground stations). Their satellites will work with mechanical and electronically steered antennae, and each satellite will have a wide-area coverage mode for broadcast and distributing software updates. Their patent application mentions community broadband and hotspots, large enterprises, sh[...]
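For readers who want to sanity-check the coverage claims, the relationship between altitude, minimum elevation angle, and footprint size follows from standard orbital geometry. Below is a short, illustrative Python sketch (mine, not from Telesat's filing; the two altitudes and the 20-degree minimum elevation are the figures quoted above) that computes each shell's orbital period via Kepler's third law and its single-satellite footprint half-angle for a spherical Earth.

```python
import math

MU = 3.986004418e14   # Earth's standard gravitational parameter, m^3/s^2
R_EARTH = 6371e3      # mean Earth radius, m

def orbital_period_minutes(altitude_m):
    """Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / mu)."""
    a = R_EARTH + altitude_m
    return 2 * math.pi * math.sqrt(a ** 3 / MU) / 60

def footprint_half_angle_deg(altitude_m, min_elevation_deg):
    """Earth-central half-angle of one satellite's coverage circle for a
    given minimum elevation angle e: lambda = acos(Re*cos(e)/(Re+h)) - e."""
    e = math.radians(min_elevation_deg)
    return math.degrees(math.acos(R_EARTH * math.cos(e) / (R_EARTH + altitude_m)) - e)

# The two shells described above, at a 20-degree minimum elevation angle.
for name, alt in (("polar shell (1,000 km)", 1000e3),
                  ("inclined shell (1,248 km)", 1248e3)):
    print(f"{name}: period ~{orbital_period_minutes(alt):.0f} min, "
          f"footprint half-angle ~{footprint_half_angle_deg(alt, 20):.1f} deg")
```

The footprint at a 20-degree elevation angle is small (roughly a 16-degree Earth-central half-angle per satellite), which helps explain why a purely polar design would need the 168 satellites Telesat cites and why adding inclined planes, which spend more time over populated latitudes, lets the hybrid design get by with 117.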



Google Now a Target for Regulation

2017-11-13T11:35:00-08:00

Headline in the Washington Post: "Tech companies pushed for net neutrality. Now Sen. Al Franken wants to turn it on them." 9 Nov 2017

The time was — way back around the turn of the century — when all Internet companies believed that the Internet should be free from government regulation. I lobbied along with Google and Amazon to that end (there was no Twitter or Facebook then); we were successful over the objection of traditional telcos, who wanted the protection of regulation. The Federal Communications Commission (FCC) under both Democrats and Republicans agreed to forbear from regulating the Internet the way they regulate the telephone network; the Internet flourished, to put it mildly.

Fast forward to 2015. Google and other Internet giants and their trade group, the Internet Association, were successful in convincing the Obama FCC to reverse that policy and regulate Internet Service Providers (ISPs) under the same regulation which helped to stifle innovation in telephony for decades. The intent, according to the Internet Association, was to protect Net Neutrality (a very good name) and assure that ISPs didn't either censor or prefer their own content over the content of others — Google's, for example. The regulation was acknowledged to be preemptive: ISPs weren't discriminating, but they might.

This spring Trump's FCC Chair, Ajit Pai, announced the beginning of an effort to repeal the 2015 regulations and return the Internet to its former lightly regulated state. The Internet Association and its allies mounted a massive online campaign against deregulation in order, they said, to protect Net Neutrality. One of their allies was the Open Market Initiative, which was then part of the New America Foundation. More about them below.

I blogged to Google: "You run a fantastically successful business. You deliver search results so valuable that we willingly trade the history of our search requests for free access. Your private network of data centers, content caches and Internet connections assures that Google data pops quickly off our screen. Your free Chrome browser, Android operating system, and Gmail see our communication before it gets to the Internet and get a last look at what comes back from the Internet before passing it on to us. You make billions by monetizing this information with at least our implied consent. I mean all this as genuine praise.

"But I think you've made a mistake by inviting the regulatory genie on to the Internet. Have you considered that Google is likely to be the next regulatory target?"

It didn't take long. In August the European Union declared a penalty against Google. Barry Lynn of the Open Market Initiative posted praise for the EU decision on the New America website. According to the NY Times:

"The New America Foundation has received more than $21 million from Google; its parent company's executive chairman, Eric Schmidt; and his family's foundation since the think tank's founding in 1999. That money helped to establish New America as an elite voice in policy debates on the American left and helped Google shape those debates…

"Hours after this article was published online Wednesday morning, Ms. Slaughter announced that the think tank had fired Mr. Lynn on Wednesday for 'his repeated refusal to adhere to New America's standards of openness and institutional collegiality.'"

Mr. Lynn and his colleagues immediately founded the Open Market Institute. The front page of their website says: "Amazon, Google and other online super-monopolists, armed with massive dossiers of data on every American, are tightening their grip on the most vital arteries of commerce, and their control over the media we use to share news and information with one another."

Sen. Al Franken and the Open Market Institute held an event which led to the WaPo headline and the article which begins: "For years, tech companies have insisted that they're different from everything e[...]



Court Finds Anti-Malware Provider Immune Under CDA for Calling Competitor's Product Security Threat

2017-11-13T10:26:00-08:00

Plaintiff anti-malware software provider sued defendant — who also provides software that protects internet users from malware, adware, etc. — bringing claims for false advertising under Section 43(a) of the Lanham Act, as well as other business torts. [Enigma Software Group v. Malwarebytes Inc., 2017 WL 5153698 (N.D. Cal., November 7, 2017)]

Plaintiff claimed that defendant wrongfully revised its software's criteria to identify plaintiff's software as a security threat when, according to plaintiff, its software is "legitimate" and posed no threat to users' computers. Defendant moved to dismiss the complaint for failure to state a claim upon which relief may be granted. It argued that the provisions of the Communications Decency Act at Section 230(c)(2) immunized it from plaintiff's claims. Section 230(c)(2) reads as follows:

No provider or user of an interactive computer service shall be held liable on account of — (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in [paragraph (A)].

Specifically, defendant argued that the provision of its software, using the criteria it selected, was an action taken to make available to others the technical means to restrict access to malware, which is objectionable material. The court agreed with defendant's argument that the facts of this case were "indistinguishable" from the Ninth Circuit's opinion in Zango, Inc. v. Kaspersky, 568 F.3d 1169 (9th Cir. 2009), in which the court found that Section 230 immunity applied in the anti-malware context.

Here, plaintiff had argued that immunity should not apply because malware is not within the scope of "objectionable" material that one may seek to filter in accordance with 230(c)(2)(B). Under plaintiff's theory, malware is "not remotely related to the content categories enumerated" in Section 230(c)(2)(A), to which (B) refers. In other words, the objectionableness of malware is of a different nature than the objectionableness of material that is obscene, lewd, lascivious, filthy, excessively violent, or harassing. The court rejected this argument on the basis that the determination of whether something is objectionable is left to the provider's discretion. Since defendant found plaintiff's software "objectionable" in accordance with its own judgment, the software qualifies as "objectionable" under the statute.

Plaintiff also argued that immunity should not apply because defendant's actions taken to warn of plaintiff's software were not taken in good faith. But the court applied the plain meaning of the statute to reject this argument — the good faith requirement only applies to conduct under Section 230(c)(2)(A), not (c)(2)(B).

Finally, plaintiff had argued that immunity should not apply with respect to its Lanham Act claim because of Section 230(e)(2), which provides that "nothing in [Section 230] shall be construed to limit or expand any law pertaining to intellectual property." The court rejected this argument because although the claim was brought under the Lanham Act, which includes provisions concerning trademark infringement (which clearly relates to intellectual property), the nature of the Lanham Act claim here was for unfair competition, which is not considered to be an intellectual property claim.

Written by Evan D. Brown, Attorney
Follow CircleID on Twitter
More under: Law, Malware [...]



Weaponizing the Internet Using the "End-to-end Principle" Myth

2017-11-12T14:39:00-08:00

At the outset of the Internet Engineering Task Force (IETF) 100th meeting, a decidedly non-technical initial "Guide for human rights protocol considerations" was published. Although the IETF has always remained true to its DARPA origins as a tool for developing disruptive new technical ideas, it launches into bizarre territory when dealing with non-technical matters. The rather self-referential draft Guide presents research containing 19 different proposed "guidelines," based on the work over the past two years of a small group of people known as the Human Rights Protocol Considerations Research Group (HRPC). The preponderance of the work and postings were those of the chair, and two-thirds of all the posts came from only five people. Whatever one might think about the initiative, it is a well-intentioned attempt by activists in several human rights arenas to articulate their interests and needs based on their conceptualisation of "the internet."

At the outset of the guidelines is a clause dubbed "connectivity" that consists of an invocation of the internet "end-to-end principle." Connectivity is explained by way of the end-to-end principle, which [Saltzer] holds means that "the intelligence is end to end rather than hidden in the network" [RFC1958]. The end-to-end principle, the Guide asserts, is important for the robustness of the network and innovation, and such robustness is crucial to enabling human rights like freedom of expression. [Amusingly, the first citation is not freely available and requires $15 to view.]

There are several ironies here. The Saltzer article was written in 1984, shortly after DARPA had adopted TCP and IP for use on its own highly controlled packet networks. RFC1958 was written in 1996, shortly after the DARPA Internet became widely used for NREN (National Research and Educational Network) purposes and was still largely controlled by U.S. government agencies for deployments in the U.S. and its international scientific research partners. Already, the DARPA Director who had originally authorized DARPA internet development in the 1970s had become significantly concerned about it becoming part of a public infrastructure and weaponized. The concern was turned into action as CRISP (Consortium for Research on Information Security and Policy) at Stanford University. The CRISP team described in considerable detail how the DARPA internet in a global public environment was certain to be used to orchestrate all manner of network-based attacks by State and non-State actors on public infrastructures, end-users, and trust systems.

Twenty years later, it is incredible that decades-old technical papers prepared for closed or tightly managed U.S. government networks are being cited as global public connectivity mantras for human rights purposes — after the predicted, profoundly adverse CRISP exploits have been massively manifested. Never mind that the notion is also founded on a kind of chaotic utopian dream where running code somehow provides for unfettered communication and information capabilities for every human and object on the planet rather than business, legal, and economic systems. To the extent that global internetworking capabilities have actually come into existence, this has occurred first and foremost through commercial mobile providers and vendors using their own internet protocols, combined with the telecommunication, commercial internet, and cable providers and vendors worldwide.

The "end-to-end principle," which has never really existed except as some kind of alt-truth political slogan, is plainly a recipe for disaster on multiple levels. It is disastrous because the complexities and vulnerabilities of our networking infrastructure today result in a highly asymmetric threat environment. Those possessing the massive resources and incentives to pursue those threats and "innovate" will always far exceed the ability of individual end-users to protect th[...]



Data on Cuba's SNET and a Few Suggestions for ETECSA

2017-11-10T14:42:00-08:00

What would be the impact of, say, a $100,000 equipment grant from ETECSA to SNET?

I've written several posts on Cuba's user-deployed street networks, the largest of which is SNET in Havana. [1] (SNET was originally built by the gaming community, but the range of services has grown substantially). My posts and journalists' accounts like this one describe SNET, but a new paper presents SNET measurement data as well as descriptive material. The abstract of the paper sums it up:

Working in collaboration with SNET operators, we describe the network's infrastructure and map its topology, and we measure bandwidth, available services, usage patterns, and user demographics. Qualitatively, we attempt to answer why the SNET exists and what benefits it has afforded its users. We go on to discuss technical challenges the network faces, including scalability, security, and organizational issues.

You should read the paper — it's interesting and well-written — but I can summarize a few points that caught my attention.

* * *

The Street Network in Havana – Community-created map showing the service areas of several SNET pillars spanning metro Havana. (source)

SNET is a decentralized network made up of local nodes, each serving up to 200 users in a neighborhood. The users connect to local nodes using WiFi or Ethernet cables strung over rooftops, etc. The local nodes connect to regional "pillars," and the pillars peer with each other over fixed wireless links. The node and pillar administrators form a decentralized organization, setting policy, supporting users and keeping their servers running and online as best they can. (This reminds me of my school's first Web server — a Windows 3 PC on my desk that crashed frequently).

SNET organization (source)

The average utilized bandwidth between two pillars during a 24-hour period was 120 Mb/s of a maximum throughput of 250 Mb/s, and the authors concluded that throughput is generally constrained by the available bandwidth in the WiFi links between pillars. As such, faster inter-pillar links and/or adding new pillars would improve performance. Faster links from local nodes to pillars, new node servers, etc. would also add to capacity and availability, but that hardware would cost money. The Cuban government would probably see the provision of outside funds as subversive, but what would be the impact of, say, a $100,000 equipment grant from ETECSA to SNET?

The paper drills down on the network topology, discusses applications and presents usage and performance statistics. Forums are one of the applications, and one of the forums is Netlab, a technical community of over 6,000 registered members who have made over 81,000 posts. They focus on open-source development and have written an SNET search engine and technical guides on topics like Android device repair. The export of Cuban content and technology has been a long-standing focus of this blog, and it would be cool to see Netlab available to others on the open Internet.

Netlab growth – Registration dates of Netlab users since its creation, showing accelerated growth over the past year. (source)

The authors of the paper say that as far as they know, "SNET is the largest isolated community-driven network in existence" (my italics). While it may be the largest isolated community network, there are larger Internet-connected community networks, and that is a shame. I hope Cuba plans to "leapfrog" to next-generation technology (and policy) while implementing stopgap measures like WiFi hotspots, 3G mobile and DSL. If SNET and other community networks were legitimized, supported and linked to the Internet (or even the Cuban intranet), they would be useful stopgap technology. ETECSA could also use the skills of the street net builders. I don't expect ETECSA to take my advice, but if working with SNET is too big a step, the[...]



How a DNS Proxy Service Can Reduce the Risk of VoIP Service Downtime

2017-11-10T13:13:00-08:00

Consumers are embracing VoIP services now more than ever as they get used to calling over Internet application services such as Skype, Facetime, and Google Hangouts. Market Research Store predicts that the global value of the VoIP services market will reach more than USD 140 billion by 2021, representing a compound annual growth rate above 9.1% between 2016 and 2021.

For cable MSOs deploying voice services, the ability to implement and manage Dynamic DNS (DDNS) is essential. However, DDNS updates pose significant challenges for large Tier 1 and 2 operators due to the difficulty of synchronizing DNS servers and DHCP servers in large "zones" or domains. When DNS servers become too difficult to manage, the result is often unreliable or even unavailable VoIP service. In cases where resynchronization is needed between DNS and DHCP servers, service downtime can take up to an hour to resolve. Customers typically have high quality-of-experience expectations when using voice services, so unwanted downtime increases the risk of a negative experience and can cause customer churn.

The VoIP market shows no signs of slowing its growth, so how are today's operators going to manage the increasing complexity of synchronizing DNS servers and DHCP servers? One emerging solution is to eliminate the need for DDNS altogether and instead deploy a DNS proxy service. These proxy services send DNS requests directly to the DHCP server, significantly simplifying the management of large DNS zones and reducing the risk of VoIP service downtime. Because the DHCP server already knows the relationship between the IP address and the Fully Qualified Domain Name (FQDN), since it is the authority on IP-FQDN mapping, a DNS proxy service can request the mapping directly from the DHCP server without the necessity of completing dynamic DNS updates and the headache of managing large DNS zones.

In many cases, this simple solution can be integrated seamlessly with existing or updated network topologies, making the most of an operator's existing device provisioning investments. As a result, DNS synchronization is no longer a concern, since the DHCP server is where the IP-to-FQDN assignment originates. This means increased reliability of the DNS solution, less chance of subscriber service downtime, and, by association, reduced risk of customer churn.

Learn more about providing higher availability of mission-critical services such as VoIP by reading the Incognito DNS Proxy Service fact sheet.

Written by Pat Kinnerk, Senior Product Manager at Incognito Software Systems
Follow CircleID on Twitter
More under: Access Providers, DNS, VoIP [...]
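To make the idea concrete, here is a minimal sketch of a DNS proxy that answers A-record queries directly from DHCP lease state rather than relying on dynamic DNS updates. It is illustrative only, not Incognito's implementation: the lease lookup is a hypothetical stand-in for querying a real DHCP server's lease database or API, and it uses the third-party dnslib package for DNS message handling.

```python
import socketserver
from dnslib import DNSRecord, RR, QTYPE, A  # pip install dnslib

def lookup_lease(fqdn):
    """Hypothetical stand-in for asking the DHCP server, the authority on
    IP-to-FQDN mapping, which address a name is currently leased to."""
    leases = {"mta1.subscriber.example.net.": "192.0.2.10"}  # made-up data
    return leases.get(fqdn)

class DnsProxyHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data, sock = self.request            # UDP handler gets (bytes, socket)
        request = DNSRecord.parse(data)
        reply = request.reply()              # response skeleton, same query ID
        qname = str(request.q.qname)
        ip = lookup_lease(qname)
        if ip and request.q.qtype == QTYPE.A:
            # Short TTL so answers track lease changes without zone updates.
            reply.add_answer(RR(qname, QTYPE.A, rdata=A(ip), ttl=60))
        # Unknown names simply get an empty answer in this sketch.
        sock.sendto(reply.pack(), self.client_address)

if __name__ == "__main__":
    with socketserver.UDPServer(("127.0.0.1", 5353), DnsProxyHandler) as srv:
        srv.serve_forever()
```

Queried with, say, dig @127.0.0.1 -p 5353 mta1.subscriber.example.net, the proxy returns whatever address the lease state currently holds, which is the design point of the approach: the DNS answer can never drift out of sync with DHCP, because DHCP is the only source consulted.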



Internet Hall of Fame Inductees Gather at GNTC to Discuss New Generation of Internet Infrastructure

2017-11-10T10:37:00-08:00

Confronted with the rapid development of the Internet, traditional networks face severe challenges. It is therefore imperative to accelerate the construction of global network infrastructure and build a new generation of Internet infrastructure suited to the Internet of Everything and the intelligent society. From November 28 to 30, 2017, "GNTC 2017 – Global Network Technology Conference," organized by BII Group and CFIEC, will open in Beijing. The "Global Internet Infrastructure Forum," the most prominent part of the conference, will bring together several Internet Hall of Fame inductees as well as a number of authoritative experts in the field to discuss technology changes in Internet infrastructure and its development and challenges.

Since data was first transmitted between two computers in 1969, the network has in a few decades spread from the military field to scientific research and civilian use, ushering in the "era of the Internet." However, as "Internet+" and the industrial Internet deepen, the trend has become increasingly clear: the Internet is transforming all walks of life and becoming a common infrastructure. In this context, the existing architecture has exposed more and more problems in scalability, security, controllability and manageability, owing to its complex design, inadequate openness and low efficiency. The industry has been improving it incrementally for decades, but upgrading Internet infrastructure in every aspect is the fundamental path for long-term development.

At the "Global Internet Infrastructure Forum" of the GNTC conference, father of the Internet and Internet Hall of Fame inductee Vint Cerf, father of the Korean Internet and Internet Hall of Fame inductee Kilnam Chon, DNS inventor and Internet Hall of Fame inductee Paul Mockapetris, Internet Hall of Fame inductee Paul Vixie, APNIC Director General Paul Wilson and other global Internet experts will gather in Beijing. Presidents of organizations and institutions, senior management of Internet companies and global operator representatives will also be invited, focusing on technological change, infrastructure development, root servers, and new opportunities and challenges, and exploring how Internet infrastructure will fully upgrade to adapt to the new world of the Internet of Everything amid the rapid adoption of IPv6, SDN and other network technologies.

As the largest network technology event in China, the conference expects more than 2,000 attendees. It will feature two main sessions, one roundtable forum, eight technical summits (SDN, NFV, IPv6, 5G, NB-IoT, Network Security, Cloud and Data Center, Edge Computing) and a number of workshops (P4, the Third Network, CORD, ONAP, etc.). By providing a platform for the parties to communicate and exchange, it is dedicated to promoting win-win cooperation and the process of network reconstruction.

Written by Xudong Zhang, Vice President of BII Group
Follow CircleID on Twitter
More under: Cloud Computing, Cybersecurity, Data Center, Internet Governance, Internet of Things, Internet Protocol, IP Addressing, IPv6, Networks [...]



Apple (Not Surprisingly) is Not a Cybersquatter

2017-11-09T10:50:00-08:00

It's highly unusual for a well-known trademark owner to be accused of cybersquatting, but that's what happened when a Mexican milk producer filed a complaint against Apple Inc. under the Uniform Domain Name Dispute Resolution Policy (UDRP) in an attempt to get the disputed domain name. Not only did Apple win the case, but the panel issued a finding of "reverse domain name hijacking" (RDNH) against the company that filed the complaint.

The 'LA LA' Story

According to the UDRP decision, Apple obtained the disputed domain name in 2009 when it purchased the online music-streaming company La La Media, Inc. The domain name had been registered in 1996 and was acquired in 2005 by La La Media, which used it in connection with its online music service between 2006 and 2009. Although Apple stopped operating the La La music service in 2010, and the corresponding LA LA trademarks were canceled in 2015 and 2017, Apple said that it continues to use the domain name in connection with "residual email services."

Apparently seizing on the canceled LA LA trademarks, Comercializadora de Lacteos y Derivados filed a UDRP complaint against Apple for the domain name, arguing that it "claims to have used LALA as a trademark before the registration of the Disputed Domain Name, since as early as 1987" — long before Apple acquired the domain. The complainant further argued that Apple "registered and used the Disputed Domain Name with the bad faith intent to defraud the Complainant's customers" and that "Respondent's passive holding of the Disputed Domain Name constitutes sufficient evidence of its bad faith use and registration."

Apple's 'LA LA' Rights

The UDRP panel rejected these arguments, as well as those related to the UDRP's "rights or legitimate interests" requirement, finding that the complainant had "put these assertions forward without any supporting argumentation or evidence." Importantly, the panel wrote:

The Panel is of the opinion that, between June 2006 and May 2010, Respondent and its predecessor-in-interest made legitimate use of the Disputed Domain Name to offer bona fide services under its own LA LA mark. These services are unrelated to the Complainant and its LALA mark.

The Panel also wrote:

[T]he fact that the Respondent chose to cease active use of the Disputed Domain Name does not demonstrate in itself that the Respondent has no rights or legitimate interests in the Disputed Domain Name. It is common practice for trademark holders to maintain the registration of a domain name, even if the corresponding trademark was abandoned, e.g., following a rebranding exercise. Apart from the goodwill that might be associated to the trademark, the domain name in question may have intrinsic value. In the case at hand, the Panel notes that the term "la-la" is often used as a nonsense refrain in songs or as a reference to babbling speech, and that there are many concurrent uses of the "LALA" sign as a brand. In such circumstances, a domain name holder has a legitimate interest to maintain the registration of a potentially valuable domain name.

(Interestingly, the panel said nothing about "La La Land," the 2016 movie that won six Academy Awards — and which uses a similar domain name.)

After its conclusion in favor of Apple, allowing the computer company to keep the domain name, the panel found that the "Complainant was, or should have been aware, of [Apple]'s bona fide acquisition and use of the Disputed Domain Name" and that it "must have been aware, before filing the Complaint, that the Disputed Domain Name has never be[en] used to target the Complainant or trade on its goodwill." As a result of this finding, the panel said that the Complainant had engaged [...]



Qatar Crisis Started With a Hack, Now Political Tsunami in Saudi Arabia - How Will You Be Impacted?

2017-11-09T10:35:00-08:00

The world has officially entered what the MLi Group labels the "New Era of the Unprecedented". In this new era, traditional cyber security strategies are failing on a daily basis, political and terrorist destruction-motivated cyber attacks are on the rise, threatening "survivability", and local political events unfold that impact the world overnight and forever. Decision makers know they cannot continue doing the same old things, but they don't know what to do next, or differently, that would be effective.

Deloitte and Equifax are giants who discovered the hard way that they were not immune. The Qatar crisis, with damage running into the billions of dollars, was triggered by a cyber-attack that a Washington Post report claims was perpetrated by Qatar's neighbor, the UAE. Now comes the Saudi Tsunami, with ramifications that will impact stakeholders worldwide. If you thought these events in far, far away lands don't impact you and your businesses, then I suggest you take your head out of the sand, and fast.

Local geo-political events are sending shockwaves globally. To learn what to be on the lookout for, and how you can mitigate these risks, watch the MLi Group's "Era of the Unprecedented" video on the Saudi Tsunami by clicking here.

Written by Khaled Fattal, Group Chairman, The Multilingual Internet Group

Follow CircleID on Twitter

More under: Censorship, Cyberattack, Cybersecurity, Data Center, Internet Governance




Why Aren't We Fixing Route Leaks?

2017-11-08T14:55:00-08:00

In case you missed it (you probably didn't), the Internet was hit with the Monday blues this week. As operator-focused lists and blogs identified, "At 17:47:05 UTC yesterday (6 November 2017), Level 3 (AS3356) began globally announcing thousands of BGP routes that had been learned from customers and peers and that were intended to stay internal to Level 3. By doing so, internet traffic to large eyeball networks like Comcast and Bell Canada, as well as major content providers like Netflix, was mistakenly sent through Level 3's misconfigured routers."

In networking lingo, a "route leak" had occurred, and a substantial one at that. Specifically, the Internet was the victim of a Type 6 route leak, where:

"An offending AS simply leaks its internal prefixes to one or more of its transit-provider ASes and/or ISP peers. The leaked internal prefixes are often more-specific prefixes subsumed by an already announced, less-specific prefix. The more-specific prefixes were not intended to be routed in External BGP (eBGP). Further, the AS receiving those leaks fails to filter them. Typically, these leaked announcements are due to some transient failures within the AS; they are short-lived and typically withdrawn quickly following the announcements. However, these more-specific prefixes may momentarily cause the routes to be preferred over other aggregate (i.e., less specific) route announcements, thus redirecting traffic from its normal best path."

In this case, the painful result was significant Internet congestion for millions of users in different parts of the world for about 90 minutes. One of the main culprits apparently fessed up, with CenturyLink/Level 3 quickly issuing a reason for the outage (I pity "that guy"; being a network engineer at the world's largest ISP ain't easy).

Can't we fix this? Route leaks are a fact of life on the Internet. According to one ISP's observations, on any given day of the week, between 10 and 20 percent of announcements are actually leaks. Type 6 route leaks can be alleviated in part by technical and/or operational measures. For internal prefixes never meant to be routed on the Internet, one suggestion is to use origin validation to filter leaks, but this requires adoption of RPKI and only deals with two specific types of leak. (Source: Job Snijders, "Everyday practical BGP filtering," presented at NANOG 67.)

From a contractual and operational perspective, Level 3's customers and others affected are presumably closely scrutinizing their SLAs. Maybe this episode will incentivize Level 3 to take some corrective action(s), like setting a fail-safe maximum announcement limit on their routers to catch potential errors. Perhaps Level 3's peering partners are similarly considering reconfiguring their routers to not blindly accept thousands of additional routes, although the frequency or other characteristics of changes in routing announcements might make this infeasible.

Another potential solution, requiring broader collective action, is NTT's peer locking, where NTT prevents leaked announcements from propagating further by filtering on behalf of other ISPs with which it has an agreement. It's an approach that is mutually beneficial. Much of the routing chaos could have been prevented if peer locking arrangements had been in place between NTT (or other large backbone ISPs peering with Level 3) and any of the impacted ASes (e.g., Comcast had ~20 impacted ASes). NTT has apparently had some success with the approach, having arrangements with many of the world's largest carriers of Internet traffic. In one case where they deployed peer locking, the number of route leaks decreased by an order of magnitude. Moreover, the approach is apparently being replicated by other large carriers. Reg[...]
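Both router-level defenses mentioned above, a fail-safe maximum-prefix limit and filtering out unexpected more-specifics, are easy to see in miniature. The sketch below is illustrative Python using only the standard-library ipaddress module, not any vendor's filter language; the limit value and the aggregate list are made-up examples.

```python
import ipaddress

MAX_PREFIXES = 4  # fail-safe announcement limit for this peer (made-up value)
# Aggregates this peer is already expected to announce.
EXPECTED_AGGREGATES = [ipaddress.ip_network("192.0.2.0/24")]

def check_announcements(prefixes):
    """Apply two cheap sanity checks to a batch of announcements from a peer."""
    networks = [ipaddress.ip_network(p) for p in prefixes]
    # Defense 1: tear down the session if the peer announces too many routes,
    # the moral equivalent of a maximum-prefix limit.
    if len(networks) > MAX_PREFIXES:
        return f"session down: {len(networks)} routes exceeds limit of {MAX_PREFIXES}"
    # Defense 2: flag more-specifics of already-announced aggregates,
    # the signature of a Type 6 leak described above.
    leaks = [n for n in networks
             if any(n != agg and n.subnet_of(agg) for agg in EXPECTED_AGGREGATES)]
    return f"suspected Type 6 leak(s): {leaks}" if leaks else "accepted"

print(check_announcements(["192.0.2.0/24", "198.51.100.0/24"]))  # accepted
print(check_announcements(["192.0.2.128/25"]))  # more-specific of an aggregate
```

Real deployments are harder than this toy suggests, of course: legitimate more-specifics exist (traffic engineering, for one), which is why per-peer knowledge of the kind NTT encodes in its peer-locking filters is needed to separate leaks from intent.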



A Glance Back at the Looking Glass: Will IP Really Take Over the World?

2017-11-06T13:15:00-08:00

In 2003, the world of network engineering was far different than it is today. For instance, EIGRP was still being implemented on the basis of its ability to support multi-protocol routing. SONET and other optical technologies were just starting to come into their own, and all-optical switching was just beginning to be considered for large-scale deployment. What Hartley says of history holds true when looking back at what seems to be a former age: "The past is a foreign country; they do things differently there."

In the midst of this change, the Association for Computing Machinery (the ACM) published a paper entitled "Will IP really take over the world (of communications)?" This paper, written during the ongoing discussion within the engineering community over whether packet switching or circuit switching is the "better" technology, provides a lot of insight into the thinking of the time. Specifically, as the authors say, the belief that IP is better:

… is based on our collective belief that packet switching is inherently superior to circuit switching because of the efficiencies of statistical multiplexing, and the ability of IP to route around failures. It is widely assumed that IP is simpler than circuit switching, and should be more economical to deploy and manage.

Several of the reasons given are worth considering in light of the years between the paper's writing and today. Section 2 of the paper suggests four myths that need to be considered in the world of IP-based packet switching.

The first of these myths is that IP is more efficient. Here the authors take on an early argument used by packet switching advocates: when a circuit is idle, reserved bandwidth is unused; because packets do not reserve bandwidth in this way, packet switching makes more efficient use of the available resources. The authors counter this argument by observing that packet-switched networks are not, in fact, that heavily utilized, and by exploring why this might be so. The reasons they give are that packet-switched networks become unstable if they are overutilized, and that operators want to drive delay down as much as possible. The authors say:

But simply reducing the average link utilization will not be enough to make users happy. For a typical user to experience low utilization, the variance of the network utilization needs to be low, too. Reducing variations in link utilization is hard; today we lack effective techniques to do it. It might be argued that the problem will be solved by research efforts on traffic management, congestion control, and multipath routing. But to-date, despite these problems being understood for many years, effective measures are yet to be introduced.

The second of these myths is that IP is stable. Here the authors point out the many potential problems with packet-switched networks, particularly in the area of in-band signaling. The third myth the authors consider is that IP is simpler. Here the argument is that circuit-switched networks have fewer moving parts, each of which can be more regimented in its implementation, which makes circuit-switched networks generally simpler than their packet-switched counterparts. The fourth myth the authors consider is that real-time traffic will be able to run over IP-based networks. They argue that because of the various quality-of-service problems, near-real-time traffic, such as voice and video, can never be carried over an IP network.

What did the authors get right, and what did they get wrong? Looking back over the intervening years, some of these problems seem to have been largely solved. For instance, between research into new quality of service solutions, virtualization technologies, and researc[...]
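The low-utilization observation behind the first myth can be illustrated with the textbook M/M/1 queueing formula (my illustration, not the paper's): mean delay scales as 1/(1 - rho), so it explodes as utilization rho approaches 1. This is precisely why operators keep packet links lightly loaded, and why the gains from statistical multiplexing are smaller than the raw efficiency argument suggests.

```python
# M/M/1 mean time in system: W = (1/mu) / (1 - rho), where rho = lambda/mu.
# Illustrative numbers only; assume a 1 ms mean packet service time (1/mu).
service_time_ms = 1.0

for rho in (0.10, 0.50, 0.80, 0.90, 0.95, 0.99):
    mean_delay_ms = service_time_ms / (1 - rho)  # queueing + service delay
    print(f"utilization {rho:4.2f} -> mean delay {mean_delay_ms:6.1f} ms")
```

At 50% utilization the average packet waits only about one extra service time; at 99% it waits a hundred. Under bursty (high-variance) traffic the effect is worse still, which is the point of the quoted passage about utilization variance.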



STEM to STEMM: It Will Take Musicians to Save the Internet

2017-11-06T09:46:00-08:00

The internet is under all kinds of attacks from all kinds of people for all kinds of reasons. It's not just the internet's infrastructure that is under attack; so too is the very concept of the internet as an open communications platform serving the commonweal. Constructing effective technical defenses of the internet will require that America's students learn and develop the quantitative disciplines known as STEM: Science, Technology, Engineering, and Mathematics. Constructing effective, ethical defenses of the internet will require that students study art and philosophy. The two educational paths are symbiotic; increasing student participation in STEM education will require tearing down the thickening academic walls that separate the arts from the sciences. Policymakers should build on Carnegie's pedagogical model of combining STEM disciplines with the arts.

Music has been understood since ancient times to be mathematical beauty made audible. The actress and singer-songwriter Minnie Driver explained in a White House blog post that "Without music in my curriculum, I never would have understood math. I am so grateful to the teacher who ... encouraged me to explore my love of music as a way to help unscramble my block with mathematics."

A century ago, a notable academic institution set an example in how to develop STEM students. Long before it was a university, the Carnegie Institute of Technology included a drama department as part of its core educational mission. The US's "oldest conservatory training, and the first degree-granting drama institution..." was established in 1914 by one of America's greatest STEM educational institutions.

The National Science and Technology Council's 2013 Five Year Strategic Plan for advancing STEM education had something in common with one of the council's earlier reports and with STEM education reports from Congress: they all fail to mention one word, music. How often is a young man pounding out complex polyrhythms profiled as a potential architect? How often is a young woman who slices and dices beats on her smartphone profiled as a cryptographer? How often are young cornet players profiled as the government's next differential game theorists? Why not?

How does the government define the term "STEM"? GAO compiled a list of 209 federal programs "designed to increase knowledge of STEM fields and attainment of STEM degrees." In developing its survey methodology to identify the programs, GAO "determined that a STEM field should be considered any of the following" ten "broad disciplines" as well as certain health care programs. The ten broad disciplines include mathematics, engineering, and technology, as well as basic and applied sciences, including behavioral and cognitive sciences and other social sciences. GAO used an inclusive approach to defining STEM fields and a painstaking survey methodology to identify specific programs. A review of the 209 STEM programs makes evident that, for purposes of identifying quantitatively gifted students, the federal government has divorced the arts from the sciences.

The Executive Branch's centralized regulatory review process is capable of playing a lead role in ensuring that music education is recognized as a STEM program. National security statutes promoting STEM education don't contain any statutory bars to the executive branch recognizing that music is part of mathematics. Thus, OMB should consider issuing guidance directing agencies to recognize music education as qualifying for STEM-related funding unless contrary to law. In short, STEM could and should become STEMM: Science, Technology, Engineering, Mathematics, [...]



Google Can, at Least for Now, Disregard Canadian Court Order Requiring Deindexing Worldwide

2017-11-03T06:52:00-08:00

U.S. federal court issues preliminary injunction, holding that enforcement of a Canadian order requiring Google to remove search results would run afoul of the Communications Decency Act (at 47 U.S.C. 230).

Canadian company Equustek prevailed in litigation in Canada against rival Datalink on claims relating to trade secret misappropriation and unfair competition. After the litigation, Equustek asked Google to remove Datalink search results worldwide. Google initially refused altogether, but after a Canadian court entered an injunction against Datalink, Google removed Datalink results from google.ca. Then a Canadian court ordered Google to delist worldwide, and Google complied. Google objected to the order requiring worldwide delisting and took the case all the way up to the Canadian Supreme Court, which affirmed the lower courts' orders requiring worldwide delisting.

So Google filed suit in federal court in the United States, seeking a declaratory judgment that being required to abide by the Canadian order would, among other things, be contrary to the protections afforded to interactive computer service providers under the Communications Decency Act, at 47 U.S.C. 230. The court entered the preliminary injunction (i.e., it found in favor of Google pending a final trial on the merits), holding that (1) Google would likely succeed on its claim under the Communications Decency Act, (2) it would suffer irreparable harm in the absence of preliminary relief, (3) the balance of equities weighed in its favor, and (4) an injunction was in the public interest.

Section 230 of the Communications Decency Act immunizes providers of interactive computer services against liability arising from content created by third parties. It states that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." [More info about Section 230]

The court found that there was no question Google is a "provider" of an "interactive computer service." Also, it found that Datalink — not Google — "provided" the information at issue. And finally, it found that the Canadian order would hold Google liable as the "publisher or speaker" of the information on Datalink's websites. So the Canadian order treated Google as a publisher and would impose liability for failing to remove third-party content from its search results. For these reasons, Section 230 applied. Summarizing the holding, the court observed that:

The Canadian order would eliminate Section 230 immunity for service providers that link to third-party websites. By forcing intermediaries to remove links to third-party material, the Canadian order undermines the policy goals of Section 230 and threatens free speech on the global internet.

The case provides key insight into the evolving legal issues around global enforcement and governance.

Google, Inc. v. Equustek Solutions, Inc., 2017 WL 5000834 (N.D. Cal. November 2, 2017)

Written by Evan D. Brown, Attorney
Follow CircleID on Twitter
More under: Intellectual Property, Internet Governance, Law, Web [...]



Confusing Similarity of Domain Names is Only a 'Standing Requirement' Under the UDRP

2017-11-02T14:43:00-08:00

WIPO's newest overview of the Uniform Domain Name Dispute Resolution Policy (UDRP) succinctly states what decisions have made clear through the years: The UDRP's first test is only a "standing requirement."

Standing, under the law, simply means that a person or company is qualified to assert a legal right. It does not mean or imply that one will necessarily prevail on any claims. The UDRP includes a well-known three-part test that all trademark owners must satisfy to prevail, but the first element has a low threshold. Specifically, that test requires a complainant to establish that the disputed "domain name is identical or confusingly similar to a trademark or service mark in which [it] has rights."

The 'Entirety of a Trademark'

UDRP decisions sometimes contain lengthy discussions about whether the "confusingly similar" test has been satisfied, but WIPO Overview 3.0 indicates that the analysis is really quite simple:

"It is well accepted that the first element functions primarily as a standing requirement. The standing (or threshold) test for confusing similarity involves a reasoned but relatively straightforward comparison between the complainant's trademark and the disputed domain name. This test typically involves a side-by-side comparison of the domain name and the textual components of the relevant trademark to assess whether the mark is recognizable within the disputed domain name.... While each case is judged on its own merits, in cases where a domain name incorporates the entirety of a trademark, or where at least a dominant feature of the relevant mark is recognizable in the domain name, the domain name will normally be considered confusingly similar to that mark for purposes of UDRP standing." [Emphasis added]

This summary doesn't differ dramatically from the previous version of WIPO's overview, which stated that the first test serves "essentially as a standing requirement." But the reference to a domain name containing the "entirety of a trademark" is new. The WIPO overview "summarize[s] consensus panel views," so it is often seen as an important authority on substantive and procedural issues under the UDRP. Therefore, the references to the "entirety of a trademark" must be taken seriously.

'Subsumed Within' a Phrase

Because the confusing similarity test is only a standing requirement under the UDRP, trademark owners should have little difficulty (in appropriate cases, of course) convincing a panel that this element has been satisfied. For example, in finding the domain name confusingly similar to the trademark DUNHILL, the panel rejected the respondent's argument that it "does not use the word DUNHILL standing alone but with the preceding word 'richard'," which "is distinctive and distinguishing." In finding confusing similarity, the panel said that the threshold imposed by the UDRP's first element "is conventionally modest, requiring an objective assessment of whether, for example, the trademark is clearly recognizable in the disputed domain name, even in the presence of additional words or strings."

Interestingly, the new WIPO overview deleted from the previous version a reference to a hypothetical example in which a domain name might not be confusingly similar to a trademark.
The old overview said: "While each case must be judged on its own merits, circumstances in which a trademark may not be recognizable as such within a domain name may include where the relied-upon mark corresponds to a common term or phrase, itself contained or subsumed within another common term or phrase in the domain name (e.g., trademark HEAT within domain name theatre.com)." [...]
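To make the "side-by-side comparison" described above concrete, here is a toy sketch in Python. It is emphatically not any panel's actual methodology, and the function names and example domains are invented for illustration: strip the TLD and punctuation from the domain name, then ask whether the mark's text appears in what remains. Notice that a naive substring test flags both a DUNHILL-style domain and the old overview's HEAT/theatre.com example, which is exactly why panels reserve judgment for marks subsumed within common terms.

# Toy sketch of the UDRP "side-by-side comparison" described above.
# Not any panel's actual methodology; names are invented for illustration.

def normalize_domain(domain: str) -> str:
    """Strip the TLD and hyphens, keeping the second-level label's text."""
    label = domain.lower().split(".")[0]   # "richard-dunhill.com" -> "richard-dunhill"
    return label.replace("-", "")          # "richard-dunhill" -> "richarddunhill"

def mark_recognizable(mark: str, domain: str) -> bool:
    """Naive check: does the mark's text appear within the domain's label?"""
    return mark.lower().replace(" ", "") in normalize_domain(domain)

# The entirety of the mark appears, so a panel would normally find
# confusing similarity for standing purposes (hypothetical domain):
print(mark_recognizable("DUNHILL", "richarddunhill.com"))  # True

# A naive test also fires here, even though HEAT is merely subsumed within
# the common word "theatre" -- the old overview's example of a mark that
# may NOT be recognizable as such:
print(mark_recognizable("HEAT", "theatre.com"))            # True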



Enabling Privacy Is Not Harmful

2017-11-02T09:32:00-08:00

The argument for end-to-end encryption is apparently heating up with the work on TLSv1.3 currently in progress in the IETF. The naysayers, however, are also out in force, arguing that end-to-end encryption is a net negative. What is the line of argument? According to a recent article in CircleID, it seems to be something like this:

  • Governments have a right to asymmetrical encryption capabilities in order to maintain order; in other words, governments have the right to ensure that all private communication is ultimately readable by the government for any lawful purpose.
  • Standards bodies that enable end-to-end encryption, and thereby prevent this absolute governmental good, endanger society.
  • The leaders of such standards bodies may, in fact, be prosecuted for their role in subverting government power.
  • The idea of end-to-end encryption is recast as a form of extremism, a radical idea that should not be supported by the network engineering community.

Is end-to-end encryption really extremist? Is it really a threat to the social order? Let me begin here: this is not just a technical issue. There are two opposing worldviews in play. Engineers don't often study worldviews or philosophy, so these questions tend to get buried in a lot of heated rhetoric.

In the first worldview, people are infinitely malleable, and will be or should be shaped by someone, with the government being the most logical choice, into a particular moral mold. In this view, the government must always have asymmetry; if any individual citizen, or any group of citizens, can stand against the government, then the government is under direct existential threat. By implication, if government is the founding order of a society, then society itself is at risk.

In the second worldview, the government arises out of the moral order of the people themselves. In this view, the people have the right to subvert the government; this subversion is only a problem if the people are ethically or morally incompetent in a way that causes such undermining to destroy the society. However, the catch in this view is this: as the government grows out of the people, the undermining of the government is the least of your worries. For if the society is immoral, the government — being made up of people drawn from the society — will be immoral as a matter of course. To believe a moral government can be drawn from an immoral population is, in this view, the height of folly.

What we are doing in our modern culture is trying to have it both ways. We want the government to provide the primary ordering of our society, but we want the people to be sovereign in their rights as well. Leaving aside the question of who is right, this worldview issue cannot be solved on technical grounds. How do we want our society ordered? Do we want it grounded in individuals who have self-discipline and constraint, or in government power to control and care for individuals who do not? The question is truly just that stark.

Now, to the second point: what of the legal basis laid out in the CircleID article? The author points to a settlement around the 3G standard, in which one participant claimed its business was harmed because its location-tracking software was not considered for the standard, primarily because the members of the standards body did not want to enable user tracking in this way. The company stated that the members of the standards body had acted in a conspiracy, and hence that the actions of the standards body fell under antitrust laws. Since there was a settlement, there w[...]
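For readers who want to see what insisting on TLS 1.3 looks like in practice, here is a minimal sketch using Python's standard ssl module (Python 3.7+; "example.com" is a placeholder host, not one drawn from the article). A client configured this way simply refuses to complete a handshake with any endpoint, or any intercepting middlebox, that cannot speak TLS 1.3:

# Minimal sketch: a client that refuses anything below TLS 1.3.
# "example.com" is a placeholder host used purely for illustration.

import socket
import ssl

ctx = ssl.create_default_context()            # verifies certificates by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # floor the protocol at TLS 1.3

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # expected: "TLSv1.3"
        print(tls.cipher())   # negotiated cipher suite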



Internet Society Seeks Nominations for 2018 Board of Trustees

2017-11-01T11:16:01-08:00

Are you passionate about preserving the global, open Internet? Do you have experience in Internet standards, development or public policy? If so, please consider applying for one of the open seats on the Internet Society Board of Trustees.

The Internet Society serves a pivotal role in the world as a leader on Internet policy, technical, economic, and social matters, and as the organizational home of the Internet Engineering Task Force (IETF). Working with members and Chapters around the world, the Internet Society promotes the continued evolution and growth of the open Internet for everyone. The Board of Trustees provides strategic direction, inspiration, and oversight to advance the Society's mission.

In 2018:

  • the Internet Society's chapters will elect one Trustee;
  • its Organization Members will elect one Trustee; and
  • the IETF will select two Trustees.

Membership in the Internet Society is not required to nominate someone (including yourself), to stand for election, or to serve on the Board. Following an orientation program, all new Trustees will begin 3-year terms commencing with the Society's annual general meeting in June 2018.

Nominations close at 15:00 UTC on December 15, 2017. Find out more by reading the Call for Nominations and other information available at: https://www.internetsociety.org/trustees

Written by Dan York, Author and Speaker on Internet technologies - and on staff of Internet Society

Follow CircleID on Twitter

More under: Internet Governance




Reverse Domain Hijacking Where Complainant Knew but Did Not Disclose Geographic Significance of Mark

2017-11-01T10:09:00-08:00

In the case of Oy Vallila Interior Ab v. Linkz Internet Services, a three-member WIPO Panel denied the Complainant's effort to have the disputed domain name transferred because the Complainant did not prove that the Respondent registered and used the disputed domain name in bad faith.

The Complainant is in the business of providing fabrics and interior design services and claimed trademark rights in its registered mark VALLILA in the European Union. The Respondent registered the disputed domain name in 2005 and used it to establish a website containing sponsored links. In response to an anonymous inquiry made on behalf of the Complainant, the Respondent offered to sell the disputed domain name for USD 32,000.

The Panel found that the Complainant failed to demonstrate that the Respondent registered and used the disputed domain name in bad faith. Vallila is the name of a geographic location, namely an inner suburb of Helsinki, Finland. The Complainant did not disclose in its complaint that its mark was also a place name, though correspondence introduced by the Respondent showed the Complainant had that knowledge.

There was no evidence the disputed domain name was used to take advantage of the trademark significance of the Complainant's mark. The web page at the disputed domain name contained sponsored links — but only about a third of them referred to Finland. The Panel observed that in an ordinary case, that amount of connection to the place name would not be enough to show geographic use of the domain name. But the Complainant did not advance anything sufficiently concrete to link the Respondent with specific knowledge of the Complainant. Even if the Respondent did have such knowledge, it may well also have had knowledge of the geographic significance of the term Vallila and of the many other businesses operating there that have Vallila in their names. The price sought for the disputed domain name, therefore, could not necessarily be attributed to the domain name's resemblance to the Complainant's trademark.

The Panel went on to find that the Complainant brought the UDRP proceeding in bad faith. The Complainant had argued that the only possible reason for registering the disputed domain name was to take advantage of its significance as the Complainant's trademark. Moreover, the Complainant relied on the communication from the Respondent's broker offering the disputed domain name for sale for USD 32,000 as evidence of both registration and use in bad faith. But the Panel found that the failure to disclose the geographic significance of the name had the potential to mislead the Panel in a way which could be, and in this case was, highly material to the Panel's decision. The Panel therefore found reverse domain name hijacking.

Oy Vallila Interior Ab v. Linkz Internet Services, WIPO Case No. D2017-1458

Written by Evan D. Brown, Attorney

Follow CircleID on Twitter

More under: Domain Names, Intellectual Property, UDRP [...]