Would you be interested in helping guide the future of the Public Interest Registry (PIR), the non-profit operator of the .ORG, .NGO and .ONG domains? If so, the Internet Society is seeking nominations for three positions on the PIR Board of Directors. The nominations deadline is Sunday, December 11, 2016.
More information about the positions and the required qualifications can be found at: http://www.internetsociety.org/call-nominations-pir-board-directors
As noted on that page:
The Internet Society is now accepting nominations for the Board of Directors of the Public Interest Registry (PIR). PIR's business is to manage the international registry of .org, .ngo, and .ong domain names, as well as associated Internationalized Domain Names (IDNs), and the new OnGood business.
In 2017 there are three positions opening on the PIR Board. Directors will serve a 3-year term that begins in April 2017 and expires in April 2020.
If you are interested in being considered as a candidate, please see the form to submit toward the bottom of the info page.
P.S. In full disclosure, the Internet Society is my employer but I have no direct connection to PIR and am passing this along purely because I think members of the CircleID community of readers might be excellent candidates for these positions.
Written by Dan York, Author and Speaker on Internet technologies, and on staff of the Internet Society
Follow CircleID on Twitter
2016-12-02T14:50:00-08:00

This post is conjecture, but it is informed conjecture. Consider the following:

• When Google Fiber started in Kansas City, most people assumed that it was a demonstration project, intended to spur investment by the incumbent US Internet service providers (ISPs). Few thought that Google wanted to become a retail ISP.

• Google Fiber garnered a lot of publicity, and Google began speaking of it as a real, profit-making business. They announced other cities and started laying fiber in some of them.

• Last June, Google bought Webpass, a small ISP that deploys fiber and was experimenting with unproven, but perhaps revolutionary, pCell wireless technology from Artemis Networks. I speculated that they might be thinking of shifting Google Fiber to a hybrid fiber-wireless model based on that acquisition and other experiments they were conducting.

• Last October, Google Fiber announced that their work would continue in cities where they had launched or were under construction, but that they would "pause operations and offices" in cities where they had been conducting exploratory discussions, and they took many, but not all, workers off the Google Fiber project.

• Google's Project Link has installed wholesale fiber backbones in two African capitals, and I have speculated that they might do the same in Havana (with the caveat that they would do it in conjunction with ETECSA, since there are no competing retail ISPs in Cuba as there are in Africa).

• Last July, ETECSA announced that they would be running a fiber trial in parts of Old Havana. They did not specify whether it was fiber to the premises or to the neighborhood.

• A month ago, a friend told me that a friend of his who worked at ETECSA said the fiber trial would begin December 5.

• Last week, Trump threatened to "terminate the deal" (whatever that means to him) if Cuba would not make it better.
• Yesterday, nearly identical stories suggesting that the White House was pushing Cuba on deals with Google and General Electric were published in the Wall Street Journal and El Nuevo Herald.

That is all for real — now for the conjecture ... Maybe the trial in Old Havana will be a joint project between Google and ETECSA. Google has considerable fiber installation experience with Project Link in Africa and Google Fiber in the US. A joint project with ETECSA would be relatively simple because they would not have to deal with competing ISPs as in Africa, or with lawsuits and other obstacles from incumbent ISPs as in the United States. It could be either a pilot experiment — a trial — or the first step in leapfrogging Havana's connectivity infrastructure. One can imagine Google installing a fiber backbone in Havana as they have done in Accra and Kampala and leaving it up to ETECSA to connect premises using a mix of fiber, coaxial cable and wireless technology. If that were to happen, Havana could "leapfrog" from one of the worst-connected capital cities in the world to a model of next-generation technology. If things went well in Havana, which city would be next?

The partnership between Google and ETECSA could take many forms. Google might supply expertise and capital, and ETECSA could supply labor and deal with the Cuban and Havana bureaucracies. In return, Google would get terrific publicity, a seat at the table when other Cuban infrastructures like data centers or video production facilities were discussed, and more users to click on their ads. (Take that, Facebook.) Havana could also serve as a model and reference sale for cooperation between Google and other cities. (Take that, Comcast and AT&T.) There might even be some revenue sharing, with ETECSA paying Google as the ISPs do in Africa. This would also be a win for the US administration and President Obama's legacy. Trump says he wants to renegotiate "the deal" with Cuba. If so, he would find Google (and GE?)
at the negotiating table along with US airlines, telephone companies, hotel chains, cruise lines, etc. Again [...]
2016-11-30T10:39:00-08:00

It's not particularly clear whether a marketing intern thought he was being clever or a fatigued pentester thought she was being cynical when the term "Purple Team Pentest" was first thrown around like spaghetti at the fridge door, but it appears we're now stuck with the term for better or worse. The definition of penetration testing has broadened to the point that we commonly label a full-scope penetration of a target's systems — with the prospect of lateral compromise and social engineering — as a Red Team Pentest, delivered by a "Red Team" entity operating from a sophisticated hacker's playbook. Likewise, we now often acknowledge the client's vigilant security operations and incident response team as the "Blue Team" — charged with detecting and defending against security threats or intrusions on a 24x7 response cycle.

Requests for penetration tests (Black-box, Gray-box, White-box, etc.) are typically initiated and procured by a core information security team within an organization. This core security team tends to operate at a strategic level within the business — advising business leaders and stakeholders of new threats, reviewing security policies and practices, coordinating critical security responses, evaluating new technologies, and generally being the go-to folks for out-of-the-ordinary security issues. When it comes to penetration testing, the odds are high that some members are proficient with common hacking techniques and understand the technical impact of threats upon the core business systems. These are the folks who typically scope and eventually review the reports from a penetration test. They are, however, NOT the "Blue Team", though they may help guide and at times provide third-line support to security operations people.
No, the nucleus of a Blue Team is the front-line personnel watching over SIEMs, reviewing logs, initiating and responding to support tickets, and generally swatting down each detected threat as it appears during their shift. Blue Teams are defensively focused and typically proficient at their operational security tasks. The highly focused nature of their role does, however, often mean that they lack what can best be described as a "hacker's-eye view" of the environment they're tasked with defending.

Traditional penetration testing approaches are often adversarial. The Red Team must find flaws, compromise systems, and generally highlight the failures in the target's security posture. The Blue Team faces the losing proposition of having to have already secured and remediated all possible flaws prior to the pentest, and then reactively responding to each vulnerability they missed — typically without comprehension of the tools or techniques the Red Team leveraged in their attack. Is it any wonder that Blue Teams hate traditional pentests? Why aren't the Red Team consultants surprised when the same tools and attack vectors work a year later against the same targets?

A Purple Team Pentest should be thought of as a dynamic amalgamation of Red Team and Blue Team members with the purpose of overcoming communication hurdles, facilitating knowledge transfer, and generally arming the Blue Team with newly practiced skills against a more sophisticated attacker or series of attack scenarios.

How to Orchestrate a Purple Team Pentest Engagement

Very few organizations have their own internal penetration testing team, and even those that do regularly utilize external consulting companies to augment that internal team — both to ensure the appropriate skills are on hand and to tackle more sophisticated pentesting demands. A Purple Team Pentest almost always utilizes the services of an external pentest team — ideally one that is accomplished and experienced in Red Team pentesting.
Bringing together two highly skilled security teams — one in attack, the other in defense — and having them not only work together, but to also achieve all the stated goals of a Purple Team pentest, requires planning and leadership.[...]
2016-11-30T07:47:00-08:00

It should come as no surprise that the Federal Communications Commission will substantially change its regulatory approach, wingspan and philosophy under a Trump-appointed Chairman. One can readily predict that the new FCC will largely undo what has transpired in previous years. However, that conclusion warrants greater calibration. As a threshold matter, the new senior managers at the FCC will have to establish new broad themes and missions. They have several options, some of which will limit how deregulatory and libertarian the Commission can be. Several ways forward come to mind:

• Channeling Trump Populism – the FCC can execute President Trump's mission of standing up to cronyism and rent seeking, even when it harms traditional constituencies and stakeholders.

• What's Good for Incumbents is Good for America – the FCC can revert to the comfortable and typical bias in favor of incumbents like Comcast, Verizon, AT&T and the major broadcast networks.

• A Libertarian Credo – the FCC can reduce its regulatory wingspan, budget and economic impact by concentrating on limited core statutory mandates, such as spectrum management.

• Humility – without having the goal of draining the FCC's pond, senior managers can temper their partisanship and snarkiness by refraining from mission creep.

Each of the above scenarios hints at major and equally significant, but unpublicized, changes at the agency. A populist FCC equates the public interest with what the court of public opinion supports. For example, most consumers like subsidies that make products and services appear free. A populist FCC responds to consumers by interpreting network neutrality rules as allowing zero rating and sponsored data plans.
However, a populist FCC risks overemphasizing public opinion that stakeholders can energize, as occurred when companies like Netflix and Google used their websites for 24/7 opposition to the Stop Online Piracy Act, and when John Oliver motivated 4 million viewers to file informal comments favoring network neutrality on the overburdened FCC website. On the other hand, a populist FCC can remind rural residents of how much they count in this new political environment. The FCC can validate rural constituencies by refraining from modifying — if not eliminating — inefficient and poorly calibrated universal service cross-subsidies. Most telephone subscribers in the U.S. do not realize that they are paying a 10%+ surcharge on their bills to support universal service funding, most of which flows to incumbent telephone companies. Consumers would quickly contract compassion fatigue if they knew about this sweetheart arrangement.

The favoring-incumbents scenario has a long and tawdry history at the FCC. If the new FCC reverts to this model, the Commission will largely give up fining companies for regulatory violations. Additionally, it might purport to reintroduce economic analysis into its decision making by adopting incumbent-advocated, but highly controversial, templates. For example, incumbents have touted the "Rule of 3" to support further industry consolidation. This rule is nothing more than an advocacy viewpoint that markets with 3 competitors generate most of the consumer benefits accruing from markets with more than 3 competitors. Having only 3 competitors may work if 1 of them declines to collude and match the terms, conditions and prices offered by the other 2. But in many markets — think commercial aviation — having only 3 operators risks markets organized to extract maximum revenues from consumers with little incentive to innovate and compete. An incumbent-friendly FCC likely will approve mergers and acquisitions with limited negotiated conditions, if any.
This kind of FCC will approve AT&T's acquisition of Time Warner despite President Trump's disapproval. The FCC probably also would have no problem with a wireless marketplace duopoly of AT&T and Verizon controlling 90+% of the nat[...]
2016-11-29T14:51:00-08:00

Even those who care about net neutrality might not have heard of the aptly named Shadow Regulations. These back-room agreements among companies regulate Internet content for a number of legitimate purposes, including curbing hate speech and terrorism and protecting intellectual property and the safety of children. While they may be noble in name, in actuality there are very serious concerns that Shadow Regulations are implemented without the transparency, accountability, and inclusion of stakeholders necessary to protect free speech on the Internet. A recent SF-Bay Internet Society (ISOC) Chapter event, co-hosted by the Electronic Frontier Foundation (EFF) in collaboration with the global Internet Society, put the spotlight on how to improve these agreements.

The keynote speakers from EFF, Mitch Stoltz, Senior Staff Attorney, and Jeremy Malcolm, Senior Global Analyst, acknowledged that there is a place for Shadow Regulations in an open Internet, but not without some serious modifications. After all, the basis of the Internet is the voluntary adoption of standards, and Shadow Regulations have the benefit of crossing borders and being more flexible, cheaper, and faster than traditional legislation. These regulations can take many forms, including codes, standards, principles, and memorandums of understanding (MOUs), and can pop up at many vulnerable links across the Internet, which the EFF calls Free Speech Weak Links. So when should the public be concerned about Shadow Regulations encroaching on Internet freedoms? Whenever there is no space for transparency, accountability and user participation, very shady Shadow Regulations can be implemented. Take, for example, policy laundering: when governments want to implement unethical policies, such as curtailing freedom of speech, they can place the blame on companies through these regulations.
Stoltz explained, "It's an abdication of responsibility to pressure platforms like Facebook to come up with a policy and enforce it while government washes its hands [of any responsibility]." When governments are backing these agreements, they're not necessarily voluntary, as companies might be engaging to curry governmental favor. In the current system, an industry can restrict content and then prop itself up as judge, jury, and executioner. Spreading these roles across impartial bodies with multi-stakeholder processes is one obvious solution. This requires balance, inclusion, and accountability: one stakeholder cannot overpower the others, the right stakeholders have to participate and be given resources to participate, and there need to be standards set that keep the body and stakeholders accountable to one another. In some cases, Shadow Regulations won't be the most effective solution: for example, in the case of hate speech, it may be more effective to empower users to limit their exposure to it rather than trying to erase it off the Internet.

* * *

Learn more about what the EFF is doing with its Shadow Regulation Project and watch the video from this event. Become a member of the San Francisco-Bay Area Internet Society Chapter to support more events like this.

About the SF-Bay Area Chapter: The San Francisco Bay Area ISOC Chapter serves California, including the Bay Area and Silicon Valley, by promoting the core values of the Internet Society. Through its work, the Chapter promotes open development, evolution and access to the Internet for those in its geographical region and beyond. This article was written by Jenna Spagnolo to support the SF Bay ISOC Chapter.

Written by Jenna Spagnolo
Follow CircleID on Twitter
More under: Censorship, Internet Governance, Net Neutrality, Policy & Regulation, Web [...]
2016-11-29T13:23:00-08:00

I've written posts about trolls in Cuba, where Operation Truth is said to use a thousand university-student trolls, and trolls in China, where government workers fabricate an estimated 488 million social media posts annually. Now we are reading about Russian government trolls. Just before the election, this post documented Russian trolling and warned that "Trump isn't the end of Russia's information war against America. They are just getting started."

"In Internet slang, a troll (/ˈtroʊl/, /ˈtrɒl/) is a person who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages in an online community (such as a newsgroup, forum, chat room, or blog) with the intent of provoking readers into an emotional response…" – Internet troll, Wikipedia, https://en.wikipedia.org/wiki/Internet_troll

After the election a new site, PropOrNot.com ("propaganda or not"), came online. Their mission is outing Russian propaganda using a combination of forensic online sleuthing and crowdsourced reporting, and they have compiled a list of 200 sites that rapidly spread stories written by Russian trolls. (More about PropOrNot here.) But is PropOrNot what it claims to be? The people behind the site remain anonymous (for understandable reasons) and their domain name registration is private. How do they determine that a site is a home for Russian content? Is there a chance that they are pro-Clinton, sour-grapes trolls? Might trolls and hackers figure out ways to game PropOrNot and get sites they oppose blacklisted?

Hmmm — I wonder if the US government hires trolls and, if not, should they? How about Canada? Chile? Zambia? How about Exxon Mobil trolls or McDonald's trolls? Is it trolls all the way down? The fake news and trolling revealed during the last few months of the US political campaign have sown doubts about everything we see and read online.
We're beginning the transition from "critical thinking" to "paranoid thinking." In 1961, Newton N. Minow gave a talk to the National Association of Broadcasters in which he worried that television was becoming a "vast wasteland":

But when television is bad, nothing is worse. I invite each of you to sit down in front of your television set when your station goes on the air and stay there, for a day, without a book, without a magazine, without a newspaper, without a profit and loss sheet or a rating book to distract you. Keep your eyes glued to that set until the station signs off. I can assure you that what you will observe is a vast wasteland.

Will the Internet become a vast wasteland? Newton Minow was correct, but there were and still are oases in the television wasteland. In spite of the trolls, fake news sites, troll-bots, etc., the Internet is and will remain replete with oases, but we cannot ignore the wasteland.

* * *

Update: I reached out to PropOrNot, pointing out that they do not identify themselves and that their domain registration is private, and asking how I could know they were not posting false claims themselves. They replied that "We sometimes provide much more background information about ourselves to professional journalists." They have now posted a document on their methodology, showing how they select sites for their list. They are not saying the sites are paid trolls, but that they publish information that originates on Russian government sites — that they disseminate Russian propaganda. At least one of the sites on their list, The Corbett Report, has refuted the claim that they are pro-Russian, but they do not address the question of their distributing material that originated on Russian sites.

Written by Larry Press, Professor of Information Systems at California State University
Follow CircleID on Twitter
More under: Web [...]
2016-11-29T10:27:00-08:00

Not infrequently heard in domain name disputes are cries of shock and gnashing of teeth that domain name holders may lawfully offer their inventory at excessive prices. Take, for example, TOBAM v. M. Thestrup / Best Identity, D2016-1990 (WIPO November 21, 2016) [...]
2016-11-28T10:54:01-08:00

The demand for penetration testing and security assessment services worldwide has been growing year on year. Driven largely by Governance, Risk, and Compliance (GRC) concerns, plus an evolving pressure to be observed taking information security and customer privacy seriously, most CIOs/CSOs/CISOs can expect to conduct regular "pentests" as a means of validating their organization's or product's security.

An unfortunate consequence of two decades of professional, service-oriented delivery of pentests is that the very term "penetration testing" now covers a broad range of security services and risk attributes — with most consulting firms providing a smorgasbord of differentiated service offerings, intermixing terms such as security assessment and pentest, and constructing hybrid testing methodologies. For those newly tasked with having to find and retain a team capable of delivering a pentest, the prospect of having to decipher the lingo and identify the right service is often daunting — failure to get it right is not only financially costly but may also be career-ending if later proven to be inadequate.

What does today's landscape of pentesting look like? All penetration testing methodologies and delivery approaches are designed to factor in and illustrate a threat represented by an attack vector or exploitation. A key differentiator between many testing methodologies lies in whether the scope is to identify the presence of a vulnerability or to exploit and subsequently propagate an attack through that vulnerability. The former is generally bucketed in the assessment and audit taxonomy, while the latter is more commonly a definition of penetration testing (or an ethical hack). The penetration testing market and categorization of services is divided by two primary factors — the level of detail that will be provided by the client, and the range of "hacker" tools and techniques that will be allowed as part of the testing.
Depending upon the business drivers behind the pentest (e.g. compliance, risk reduction, or attack simulation), there is often a graduated scale of services. Some of the most common terms used are:

• Vulnerability Scanning – The use of automated tools to identify hosts, devices, infrastructure, services, applications, and code snippets that may be vulnerable to known attack vectors or have a history of security issues and vulnerabilities.

• Black-box Pentest – The application of common attack tools and methodologies against a client-defined target or range of targets, in which the pentester is tasked with identifying all the important security vulnerabilities and configuration failures of the scoped engagement. Typically, the penetration scope is limited to approved systems and windows of exploitation to minimize the potential for collateral damage. The client provides little information beyond the scope and expects the consultant to replicate the discovery and attack phases of an attacker who has zero insider knowledge of the environment.

• Gray-box Pentest – Identical methodology to the Black-box Pentest, but with some degree of insider knowledge transfer. When an important vulnerability is uncovered, the consultant will typically liaise with the client to obtain additional "insider information", which can be used either to establish an appropriate risk classification for the vulnerability, or to initiate a transfer of additional information about the host or the data it contains (information that could likely be gained by successfully exploiting the vulnerability), without having to risk collateral damage or downtime during the testing phase.

• White-box Pentest (also referred to as Crystal-box Pentest) – Identical tools and methodology to the Black-box Pentest, but the consultants are supplied with all networking documentation and details ahead of time. Often, as part of[...]
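At its core, the vulnerability-scanning tier of this scale rests on a simple reachability check: probe a target for listening services before matching them against known weaknesses. The Python sketch below illustrates only that discovery step; the host and port list are illustrative assumptions, and real scanners such as Nmap or Nessus layer service fingerprinting and a vulnerability database on top of this basic check.

```python
# Minimal sketch of the discovery phase of a vulnerability scan:
# probe a set of TCP ports and report which ones accept connections.
# Only run this against hosts you are authorized to test.
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Illustrative target and ports, not taken from the article
    print(scan_ports("127.0.0.1", [22, 80, 443]))
```

A real engagement would feed the resulting service list into version fingerprinting and a CVE lookup; the reachability check is merely the first rung of the ladder.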
2016-11-25T07:48:00-08:00

Post-Thanksgiving is a time of reflection, when we are thankful for technological improvements that allow us to succeed. Every so often, a technology comes along that not only improves our business but can also help the world. Cloud computing is such a technology. Transitioning to the cloud is a good choice for just about any business, for several reasons. Cloud applications offer scalability, performance, cost-effectiveness and easy mobile access. However, if you question how your decision to make use of cloud-based software affects the environment, you're not the only one. The advantages that cloud-based solutions offer do in fact have implications for our planet.

Cloud services differ from more "traditional" approaches by making a single large data center accessible to multiple businesses. When you consider the electrical needs of many smaller data centers, each with its own standalone cooling system, it begins to make sense how these server arrays can be responsible for about two percent of electricity use in the United States. But is that a bad thing? Maybe not.

Leading the Way

Not surprisingly, companies that have invested billions building the cloud-based infrastructure we've come to know and love are deeply interested in their ability to prove the cloud's advantages — but they're also interested in how it all impacts the environment. Google, for one, has teamed up with Lawrence Berkeley National Laboratory to investigate the possibility of increasing efficiency, and the results are promising. Scientists speculate that in the near future, we could potentially power all of Los Angeles for a year with the kilowatts saved by moving common apps to the cloud. But where is all of this "wasted power" going in our current configuration? Take small business, for example. If you run an office with 15 computers, and they stay on all the time, you're not even close to maximizing the energy those computers are using.
A recent study revealed that the average computer in a small business setting like this is actually used less than 10 percent of the time it's drawing power. Now imagine moving all of the application processes from the computers in that office out to a data center. Yes, the servers are still on all the time, but a single server in a massive array can handle all of the tasks carried out in the office. During the time your office isn't online, another business can take advantage of that resource. It's a win based on sharing of resources. Counting Carbon Credits It's not just a matter of saving power, though. Huge reductions in carbon footprint can also be realized by migrating to a data-based infrastructure. The process is called dematerialization. Simply put, it means using fewer physical resources to accomplish the same level of productivity. Migrating to the cloud means fewer machines are used, which in turn reduces power costs and the subsequent cooling needs that require more power. Businesses don't have to be large for the gains to be significant, either. If you're an IT manager looking for that last little bit of ammunition in your argument to transition to the cloud, consider shedding some light on the environmental benefits of making the change. Carbon emissions are reduced in several ways by switching to the cloud. For example, large providers can use only the resources they need to accomplish a job, as compared to purchasing multiple computers that see just limited use. When new technologies allow providers to create more efficient data centers, the reduction in their carbon footprint is immediately amplified because of their global reach. Just take a look at this statistic: A Microsoft-led study on the impact of cloud infrastructure on carbon emissions revealed that businesses can save over 30 percent of their carbon emissions per user just by[...]
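The consolidation argument above can be made concrete with some back-of-envelope arithmetic. In the sketch below, the under-10-percent utilization figure comes from the study cited; the 100 W desktop draw and the 80% server utilization target are illustrative assumptions of my own, not figures from the article.

```python
# Back-of-envelope sketch: 15 office machines powered around the clock
# but doing useful work <10% of the time, versus the same useful work
# packed onto a shared, highly utilized cloud server.

def annual_kwh(watts, hours_per_day=24, days=365):
    """Energy drawn in a year by a device with the given average wattage."""
    return watts * hours_per_day * days / 1000.0

office_machines = 15
desktop_watts = 100          # assumed average draw per desktop
utilization = 0.10           # article: useful work <10% of powered-on time

office_total = office_machines * annual_kwh(desktop_watts)
useful_work = office_total * utilization

# A shared server running at ~80% utilization delivering the same useful work:
server_total = useful_work / 0.80

print(f"office draw: {office_total:.0f} kWh/yr")
print(f"server draw: {server_total:.0f} kWh/yr")
print(f"saved:       {office_total - server_total:.0f} kWh/yr")
```

Under these assumed figures, the shared server delivers the same useful work for roughly an eighth of the energy, which is the "win based on sharing of resources" described above.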
2016-11-23T11:56:01-08:00

Global Internet Report 2016: The economics of building trust online; preventing data breaches

Data breaches are the oil spills of the digital economy. Over 429 million people were affected by reported data breaches in 2015 — and that number is certain to grow even higher in 2016. These large-scale data breaches, along with uncertainties about the use of our data, cybercrime, surveillance and other online threats, are eroding trust on the Internet. This is why the 2016 edition of our Global Internet Report is dedicated to exploring data breaches, their impact on user trust and their consequences for the global digital economy. These consequences, not surprisingly, can be serious. The purpose of the report is not to emphasize the problem, but to offer solutions and to emphasize the important role that companies and organizations play in building a more trusted Internet.

A key question raised by the report is: Why are organisations not taking all available steps to protect the personal information they collect from each of us? The report examines the issues and walks through a number of case studies that highlight the concerns. It ends with a series of five concrete recommendations for actions we need to take. A video preview is available at https://www.youtube.com/embed/FxPRGDF-9iY

We ask you to please read the 2016 GIR, to share the report widely, and to take whatever actions you can to bring about a more trusted Internet. This issue of trust is so serious that we risk undoing all of the progress we have made over the past three decades. It is time we act together to solve it. A version of this post was originally published on the Internet Society blog.
Written by Olaf Kolkman, Chief Internet Technology Officer (CITO), Internet Society
Follow CircleID on Twitter
More under: Policy & Regulation, Privacy, Security, Web [...]
2016-11-21T20:00:00-08:00

Hyderabad set a new record in terms of attendance, with a total of 3,141 participants registered and 1,400 attendees identified as 'locals' from the region. It was also, theoretically at least, one of the longest ICANN meetings, with seven days baked into the schedule. Unfortunately, the development of the schedule itself was the source of much community criticism throughout the meeting, resulting in a chunk of time devoted to the topic during the second Public Forum. Much of the disquiet was related to the opaque manner in which the schedule was developed, without adequate consultation with the community. ICANN recently published a survey to respond to community concerns, with the intent that planning for ICANN 58, to be held in Copenhagen in March 2017, will be done early and openly. We, as a community, must remember that our participation is critical to the success of the schedule and the overall meeting. In that context, I would ask that before we start haggling over slots on the schedule and time with the Board etc., we do so with the following in mind: how do we create a schedule that allows us, the community, to make the most effective use of our collective time while supporting ICANN's mission?

For the last two years, the ICANN community has been absorbed by the IANA Transition. Now that most of that work is behind us, let's work together with the same dedication to develop a schedule for Copenhagen that allows us to progress substantive work efforts that have been languishing on the back-burner. During a joint session of the Registry and Registrar Stakeholder Groups (RySG and RrSG) there was an excellent discussion on finding ways to get more substantive work done during ICANN meetings. It was recognised that even though Hyderabad was a longer meeting, it seemed much less productive than the shorter, policy-focused meeting held earlier in the year in Helsinki.
From a RySG and RrSG perspective, the substantive work that relates to ICANN's core business or mission is the GNSO policy development and subsequent implementation efforts. Unfortunately, despite the longer meeting format, minimal time was allocated on the schedule for face-to-face meetings of the four main policy development efforts currently being undertaken in the GNSO. The GNSO Council also had a good discussion with the Board, and separately with the GAC, about providing opportunities during ICANN meetings for PDP Working Groups and the GAC to have formal moderated exchanges on topics of mutual interest. Providing an opportunity for these exchanges could go a long way to minimising the risk of the Board receiving GAC advice that is inconsistent with PDP WG recommendations approved by the GNSO Council and having to find a solution. I appreciate that my view is largely GNSO-centric, so it would also be beneficial to know what other substantive work efforts the ccNSO, ALAC, GAC and others are focussed on, so that we might better understand priorities and allocate time accordingly. The cost of bringing together the ICANN community three times a year in different locations across the globe is not insignificant to ICANN or to those that attend. It would be great if we could collectively become more efficient with our precious time once we've made the commitment to attend another ICANN meeting.
Written by Donna Austin, Policy & Industry Affairs Manager at Neustar
Follow CircleID on Twitter
More under: ICANN [...]
2016-11-21T11:50:00-08:00
The Uniform Domain Name Dispute Resolution Policy is a non-exclusive arbitral proceeding (an alternative to a statutory action under the Anticybersquatting Consumer Protection Act) implemented for owners of trademark rights to challenge domain names allegedly registered for unlawful purposes. Policy, paragraph 4(a) states that a registrant is "required to submit to a mandatory administrative proceeding in the event that a third-party ... asserts to the applicable Provider that (i) your domain name is identical or confusingly similar to a trademark or service mark in which the complainant has rights." Proof for registered trademarks is easy, since the certificate of registration attached to the complaint establishes the right. However, for unregistered trademarks (common law marks, 1(b) applications in the U.S., the supplemental register in the U.S., and state or regional registrations as opposed to registrations in national registries) complainants must prove their rights by establishing secondary meaning of their marks, which means demonstrating presence in the marketplace predating registrations of the subject domain names. The ACPA language is more explicit and clearer than the UDRP's; it requires the mark to be "distinctive at the time of the registration of the domain name." Both ICANN Panels and U.S. courts demand proof of secondary meaning, which is demonstrable by offering evidence of (1) the length and continuity of a mark's use, (2) sales, advertising, and promotional activities, (3) expenditures relating to promotion and marketing, (4) unsolicited media coverage, and (5) sales or admission figures. Since these are facts within a complainant's knowledge and control, to withhold proof (to be silent or evasive) supports an adverse inference. Complainants of less well-known alleged common law marks have heavier burdens. 
WORLD CLAIM, LAWYERS SERVICE, and SYMBOL OF LOVE (recent examples of alleged marks on the lower end of the scale, discussed further below) fail on the standing issue. In contrast, DUKE (a trademark for Duke University) has a high past and present reputation, so that Duke's burden for common law rights to a domain name incorporating the mark with a generic term is relatively light. Even if lesser-known marks qualify for standing, complainants may be stymied by insufficient proof of reputation. An example is Record Connect, Inc. v. Chung Kit Lam / La-Fame Corporation, FA1609001693876 (Forum November 3, 2016). The determination depends on the weakness of complainant's prima facie case and the merit of respondent's rebuttal. The reason proof may be sufficient for standing but not for bad faith (in Record Connect, Respondent didn't even contest the right) is explained by a mark's lack of reputation at the time the domain name was registered. The less reputation a complainant had in the past (even though its reputation may have improved and be significant in the present), the greater the likelihood respondent's plausible denial of knowledge will prevail on the merits. In Record Connect the Complainant claims to have common law rights in the RECORD CONNECT mark dating back to its first use in commerce in 2012 but, [t]o have common law rights, a complainant must demonstrate that a mark has acquired secondary meaning. Relevant evidence of secondary meaning can include sales figures, length of use of a mark, and expenditures in maintaining the mark. [Citing earlier authority] … Complainant provided the following evidence to demonstrate common law rights in the mark: A sworn affidavit indicating use of the RECORD CONNECT mark since 2012, with annual sales under the mark of approximately $2,440,000.00 and annual marketing expenses of about $36,000.00…. Although Complainant qualified, the Panel found that it was "u[...]
2016-11-18T11:56:00-08:00
If, like me and my clients, you ever receive an email about a domain name expiration, proceed with great suspicion — because many of these "notices" are a sham. They're designed to sell you services you don't need or to trick you into transferring your domain name to another registrar. Usually, the emails can safely be ignored. Here's an example: [...]
2016-11-17T07:42:00-08:00
I've discussed the role of the Internet in creating and propagating lies in a previous post, noting that Donald Trump lied more frequently than Hillary Clinton or Bernie Sanders during the campaign. Now let's look at fake news like the claim that Pope Francis had endorsed Trump. The fake post features the following image and includes a "statement" by the Pope in which he explains his decision. The post evidently originated from the website of a fake news station, WTOE 5. Avarice, not politics, seems to be the motivation for the site since it is covered with ads and links to other "stories" that attack both Clinton and Trump. WTOE 5 states that it is a satirical site on their about page, but how many readers see that? Other sites do not claim satire. For example, the Christian Times about page says nothing about satire, but does assert that they are not responsible for any action taken by a reader: Christian Times Newspaper is your premier online source for news, commentary, opinion, and theories. Christian Times Newspaper does not take responsibility for any of our readers' actions that may result from reading our stories. We do our best to provide accurate, updated news and information. The Christian Times "editorial" policy is similar to that of WTOE 5 — they published pre-election news stories on thousands of dead people voting in Florida, hacking of voter systems by the Clinton campaign, Black Panthers patrolling election sites, etc. As soon as the election was over, they informed us that Hillary Clinton had filed for divorce. Don't believe it? Here is their evidence: Given the WTOE 5 claim to be satire or the Christian Times eschewing responsibility for actions taken by readers, I suspect that unless Pope Francis or Hillary Clinton sues, there is no legal recourse. The dirty tricks during this election remind me of the Watergate burglary, but, unlike Watergate, it is not clear that a law has been broken. 
In the Watergate case, a crime was committed and the burglars were convicted and sent to prison in 1973. In 1974 investigators were able to establish a White House connection to the burglary and, under threat of impeachment, President Nixon resigned. Would it be possible to establish a connection between a website like "WTOE 5 News" and the Trump campaign? A Whois query shows us that the domain name Wto5news.com was registered by DomainsByProxy.com. We can see the address, contact information and names of people at DomainsByProxy.com, but the identity of the person or organization registering the domain name is private. I also checked the Whois record for the Christian Times. It turns out that DomainsByProxy.com is also the proxy registrant for Christiantimesnewspaper.com, and that registration is also private. I am not a lawyer, but I suspect that a request for a subpoena to get the contact information of a long list of people registering domain names for misleading websites would be seen as a "fishing expedition" by the courts. I understand the wish to protect the privacy of a person or organization registering a domain name, but there is also a public interest in discouraging sites like Wto5news.com. A verifiable, real-names policy for domain registration would discourage this sort of thing. The WELL, an early community bulletin board system, adopted such a policy years ago. Their slogan is "own your own words", and it serves to keep the discussion civil, stop bullying and lying, etc. Trump supporters seem to worry a lot about voter fraud. They advocate easing mechanisms for challenging a voter's registration and encourage strict requirements for proof of identity and residence. There is more evidence of demonstrably fraudulent[...]
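The kind of Whois check described above can be scripted. The sketch below extracts "Registrant" fields from raw Whois output and applies a simple heuristic to flag privacy-proxy registrations; the sample record is an illustrative fabrication (not a real Whois response), and the `looks_proxied` keyword test is my own assumption, not a standard rule:

```python
import re

def registrant_fields(whois_text: str) -> dict:
    """Extract 'Registrant ...' fields from raw Whois output."""
    fields = {}
    for line in whois_text.splitlines():
        m = re.match(r"\s*Registrant\s+(\w[\w ]*):\s*(.+)", line)
        if m:
            fields[m.group(1).strip()] = m.group(2).strip()
    return fields

def looks_proxied(fields: dict) -> bool:
    """Heuristic: does the registrant organization look like a privacy proxy?"""
    org = fields.get("Organization", "").lower()
    return "privacy" in org or "proxy" in org

# Illustrative record only -- not a real Whois response
sample = """\
Domain Name: EXAMPLE-NEWS-SITE.COM
Registrant Name: Registration Private
Registrant Organization: Domains By Proxy, LLC
Registrant Country: US
"""

fields = registrant_fields(sample)
print(fields["Organization"])  # Domains By Proxy, LLC
print(looks_proxied(fields))   # True
```

Of course, a proxied record only tells you that the registrant is hidden; as the post notes, getting at the underlying identity would still require legal process.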
2016-11-15T20:39:00-08:00
Trump's victory left the world perplexed, and it did not take long for texts to appear blaming Facebook and its bubble for the unexpected result. The "bubble", a theme that had circulated mostly in academic and technical spheres, gained unusual popularity in recent days; never have so many texts on the subject been published in such a short space of time, and in the recognized spaces of global journalism. But after all, what are the bubbles, and how can they decide elections? First of all, you need to understand one thing: the bubble is not something that you "enter", nor is it collective; the bubble is personal, and you build it around yourself with the help of algorithms.
What are bubbles and algorithms?
It is possible that you do not understand what a bubble (or filter bubble) is, or even what an algorithm is, let alone how the two combine to build something around you. Do not worry; let's explain. Let's start from the beginning. Eli Pariser, a cyber-activist who researched the subject, wrote an excellent book, "The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think", where he describes how the invisible filters appeared and gives them the name "filter bubble", also known simply as the "bubble". According to Pariser, one of technology's visionaries, Nicholas Negroponte, an MIT researcher, back in 1994 thought of what he called "smart agents", which would be algorithms that work as content curators. Early experiences with "smart agents" were a disaster, but Jeff Bezos developed the concept of Amazon's "smart agent" based on the bookseller who knows you and recommends the books he believes you would find interesting. 
In order to reach this goal, the algorithm began recording everything that happened on the site: who viewed or bought a book, and which other books interested them. It quickly built up a logic of relationships able to present, with a reasonable rate of success, reading suggestions based on your searches, where you access from, and your purchase and search history. The first efficient bubble had been created. Google has also adopted an algorithm with criteria that define the relevance and fit of the results to what it believes you are looking for, based on countless variables ranging from your search history to the location and device from which the search is being made. Two people with identical devices accessing Google on the same network with the same keywords will never get the same results. The same goes for Netflix, Facebook and many other networks and services you access, which are always recording and comparing an ever-growing amount of information about you, your relationships and your interests, to try to hit on your desires efficiently. The Facebook timeline, for example, is in practice not ordered chronologically but according to what Facebook's algorithm thinks is relevant to you; you do not realize this until you pay attention to the details.
Problems with bubbles
Laziness is human nature. Having an algorithm offering content and information tailored to you is almost a dream, but it can quickly become a nightmare.
Bubbles are invisible
The first problem with bubbles is that they are invisible: if you do not know they exist, you will hardly notice them, and often you fail to notice them even when you are aware of them. Bubbles are algorithms built to offer you the content and information they deem most appropriate to your interests. 
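The co-purchase logic attributed to Amazon above (record who bought what, then suggest items frequently bought together) can be sketched as a toy co-occurrence recommender. The baskets and book titles below are made up for illustration, and this is a deliberately simplified model, not Amazon's actual algorithm:

```python
from collections import Counter
from itertools import permutations

# Toy purchase history: each set is one customer's basket (made-up titles)
baskets = [
    {"dune", "foundation", "neuromancer"},
    {"dune", "foundation"},
    {"dune", "hyperion"},
    {"foundation", "neuromancer"},
]

# Count how often each ordered pair of items appears in the same basket
co_counts = {}
for basket in baskets:
    for a, b in permutations(basket, 2):
        co_counts.setdefault(a, Counter())[b] += 1

def recommend(item, k=2):
    """Return the k items most often co-purchased with `item`."""
    return [other for other, _ in co_counts[item].most_common(k)]

print(recommend("dune", 1))  # ['foundation']
```

The same counting idea, fed with clicks, searches, and locations instead of purchases, is what lets a service tailor results to each person — which is exactly why two people never see the same thing.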
Bubbles are algorithms
Bubbles are algorithms: mathematical models, based on logic, and for this reason unable to perceive subjective el[...]
2016-11-15T17:45:00-08:00
The sharing economy is a challenge for local communities. On the good side, it creates economic opportunity and reduces prices. On the bad side, it circumvents public safety and welfare protection. Such is the clash between Airbnb and local jurisdictions. San Francisco implemented a local ordinance that permits short-term rentals on the condition that the rental property is registered. In order to register the property, the resident must provide proof of liability insurance and compliance with local code, usage reporting, tax payments, and a few other things. San Francisco then enacted another ordinance that makes it a misdemeanor crime to collect a booking fee for unregistered properties. Airbnb and Homeaway sued, arguing that their businesses are protected by 47 U.S.C. § 230(c) of the Communications Decency Act (and some other arguments ignored here). EFF, CDT, The Internet Association and some other usual suspects intervened — this case is attracting lots of attention. AIRBNB, INC. v. City and County of San Francisco, Dist. Court, ND California 2016. Before delving into the application of the law to this case, let's review a few key facts. Airbnb is a website where property owners can list available rentals, and guests can arrange for accommodations. Airbnb does not own the properties in question. Airbnb makes its money by charging a service fee to the property owner and the guest. Sec. 230(c) protects interactive computer services from liability for third party content. Specifically, "[N]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." Plaintiffs sued, arguing that the San Francisco regulation is preempted by Sec. 230(c). Plaintiffs lost, which is surprising, because the plaintiffs are here being held accountable for the actions of a third party. 
But listen to the rationale of the court, which reflects careful crafting by San Francisco. First, plaintiffs are not being held liable for a third party's speech. If a property owner publishes an advertisement for a property, that is perfectly fine. The property owner can do so, and plaintiffs have no liability. Plaintiffs are not being called on to "monitor, edit, withdraw, or block" any listings. Hosting of those rental announcements, in and of itself, is not actionable. It's the next step that plaintiffs cannot do. Having received the content and hosted it, plaintiffs themselves cannot take the action of collecting a booking fee for an unregistered property. That is something the plaintiffs are doing, not the third party. The regulation is placing an obligation on the plaintiffs to confirm that the properties are registered. The compliance of plaintiffs is what is actionable, not the content of the property owners' listings. Plaintiffs and intervenors scramble and cite the long litany of case law that establishes that Sec. 230(c) provides broad immunity. Sec. 230(c) is the favorite go-to statute for establishing that online services cannot be held liable for what other people say. The problem, according to the Court, is that in each of those cited cases, the website is being held liable for its role as a publisher of third party content. There is something illegal, offensive, or problematic with the content, and the solution of the government, or the plaintiff who believed that he or she was being defamed, was to make the website liable for the third-party content. But that is exactly the type of liability Sec. 230(c) was enacted to prevent. Online services are interactive manifestations of the engagements of multiple so[...]
2016-11-13T13:24:00-08:00
Tom Wheeler surprised us as head of the Federal Communications Commission — might Trump? In May 2013, President Obama picked Tom Wheeler to head the Federal Communications Commission. The Internet community generally disapproved because Wheeler had been a lobbyist for both the cellular and cable industries and a major contributor to the Obama campaign. Internet service providers AT&T and Comcast lauded the appointment, and a few months later, the President was spotted playing golf with Brian Roberts, chief executive of Comcast. It looked like a Washington insider deal. But after looking at Wheeler's blog posts and his service on a Presidential commission, I speculated that Wheeler might be a "wolf in sheep's clothing" and, by August 2013, we had mounting evidence that Wheeler was, in fact, acting in the public interest, not that of the ISP industry. Now that Donald Trump has been elected President, the Internet community is understandably worried. There is speculation that Trump will reverse Wheeler's stance on network neutrality, and he has chosen Jeffrey Eisenach, an (often paid) opponent of regulation, as his telecommunication "point man." (You can see his testimony on net neutrality here). That seems consistent with Trump's promise to get rid of red tape and regulation and let big business do its thing, but using the words "Trump" and "consistent" in the same sentence is oxymoronic. He also promises to fight the elites in support of ordinary (white, Christian) people. That would seem to call for pro-competitive measures to weaken the grip of the Internet service giants. 
Tom Wheeler surprised industry insiders by supporting net neutrality, raising the speed used to define "broadband," fighting to curb state legislature power to stop municipal broadband, pushing for a standard TV-interface box combining the functions of today's set-top boxes and Internet interfaces, favoring sharing of Federal spectrum and scrutinizing transit Internet agreements. Will Donald Trump surprise us and work to make the American Internet Great Again? (I doubt it, but, if Trump can be elected president, anything is possible).
Written by Larry Press, Professor of Information Systems at California State University
Follow CircleID on Twitter
More under: Net Neutrality, Policy & Regulation, Telecom [...]
2016-11-12T11:52:00-08:00
Three months ago, I pondered the question "Would the Internet Survive a Trump Apocalypse?" As improbable as that outcome was in August, enough of the American electorate has "pulled the pin" to bring it on. It is a brave new world — distinctly darker and more uncertain. At the moment, the Trump team is trying to figure out how to manifest their vacuous invectives masquerading as policy. The world is watching, and Washington looks like the scene in Ghostbusters where the containment grid has just been turned off, and the demonic ghosts are rising from the underbelly of K Street. The result here is a Washington lobbying dream — a result rather different than that promised to naïve Trump devotees. There is no Reality TV script here. Some pundits credibly suggest that a U.S. President whose ideology is based singularly on narcissism will not survive. Germany — which has experienced its own nightmare with a demagogue — sees the U.S. election in dire terms, adversely affecting the stature of the West broadly. Respected commentators have opined that "America died on Nov. 8." However, only a quarter of Americans actually pulled the voting lever for Trump. Indeed, Clinton beat him in the popular vote. Now, shock is turning to all manner of actions to contain the damage done and fight back. Massive anti-Trump Washington rallies are planned for the day of the inauguration, and that is just the start. Like the old adage about the internet, people and institutions will route around malfunctions and adapt. The Trump Apocalypse is a major disruption that will create a new market for survival and resistance strategies — including internet institutions like ICANN. Disruptions of various kinds to internets and DNS have unfolded for the past forty years. Although nothing nearly this extreme has befallen U.S. 
politics, there are examples in the early 80s of reactionary political ideologues coming to power in Washington who tried to take over and significantly alter the role of government in internet-related developments. That was a very different world of technologies, providers, markets, and government roles. The pendulum swung hard in the direction of government facilitating platforms favored by those major companies with the ability to shape Washington institutions. Still, internets prevailed, and the DNS platform emerged. Atkinson's observations as to reversing the ICANN decision seem likely — especially as the Trump argument was based on pure political rhetoric and is unlikely to be supported by industry. However, what about other potential developments?
The Internet as Paradigm is dead. All kinds of global and U.S. domestic politics come into play here. Policies like net neutrality, the FCC's expansion of Title II authority, and privacy requirements seem likely to be quickly undone. Globally, the notion of the internet as a mechanism of democratic regime change seems certain to cease, notwithstanding the irony that it helped Trump come into power in the U.S.
New compliance obligations. New mandates associated with more traditional compliance obligations — particularly related to availability and critical infrastructure protection, emergency and public safety communication, national security and law enforcement investigations, cybersecurity, and identity management — will take their place. These needs are even more important today than they were 30 years ago. This shift has already occurred across many countries in the form of new legislation over the past two years. Government will look more to industry — at least industry with a signific[...]
2016-11-11T10:56:00-08:00
Brands are an asset and have a value for organizations, as they generate revenue: customers are happy to pay a premium for a brand they love. They will show a preference and be loyal to a brand they trust. There are various ways to estimate the value of a brand. An excellent point of reference is Interbrand, which for many years has followed the top 100 brands and publishes their value based on a combination of three factors:
— Financial
— Influence of brand
— Strength of brand
Apple, for instance, has an estimated value of USD 178 billion in 2016. DHL's brand value is estimated at around USD 5.7 billion. The full data and analysis can be found on Interbrand. The top 100 brands list is relatively stable, as 88 brands were already present four years ago.
Dot brand
In 2012, some brands decided that their name should actually also become the global name for all of their digital assets, and registered their brand name as a Top-Level Domain. Google, for instance, located their new blog platform, where they post news and information, on the domain blog.google. That is a typical example of a branded top-level domain.
Most valuable brands own their dot brand
The most valuable brands have been applying for their brand names as a Top-Level Domain:
— 48 of the top 100 brands applied
— eight from the top 10 applied
Brand | Value 2016 (Million USD) | Dot brand
Apple | 178,119 | Yes
Google | 133,252 | Yes
Coca-Cola | 73,102 | Yes
Microsoft | 72,795 | Yes
Toyota | 53,580 | Yes
IBM | 52,500 | Yes
Samsung | 51,808 | Yes
Amazon | 50,338 | Yes
Mercedes-Benz | 43,490 | No
GE | 43,130 | Yes
Some groups such as Microsoft, Apple or Google applied for more than one domain extension. In 2016, the cumulative value of the 48 brands participating in the dot-brand program was USD 1,205 billion. The cumulative value of the 52 brands that did not participate in the program was USD 590 billion.
Brands owning their dot brand grow faster
Between 2012 and 2016, some brands have grown impressively. 
Facebook's brand value was nearly multiplied by 6. Amazon's brand value has nearly tripled. Amongst the 10 brands with the largest relative growth, 7 are dot-brand applicants:
Brand | Growth 2012-2016 | TLD Owner
Facebook | 501% | No
Amazon | 170% | Yes
Apple | 133% | Yes
Nissan | 123% | Yes
Hermès | 108% | Yes
Google | 91% | Yes
Porsche | 85% | No
Starbucks | 84% | No
Toyota | 77% | Yes
Zara | 77% | Yes
Amongst the 10 brands with the largest absolute growth, 8 own their dot brand Top-Level Domain name. Disney is actually a bit special, as they only applied for the .ABC Top-Level Domain and not for their Disney brand name.
Brand | Growth 2012-2016 (Million USD) | Dot Brand
Apple | 101,551 | Yes
Google | 63,526 | Yes
Amazon | 31,713 | Yes
Facebook | 27,172 | No
Toyota | 23,300 | Yes
Samsung | 18,915 | Yes
Microsoft | 14,942 | Yes
Mercedes-Benz | 13,393 | No
BMW | 12,483 | Yes
Disney | 11,352 | Yes
The cumulative value of brands engaged in the dot-brand process grew globally more than the value of the other brands. [...]
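The growth figures in these tables follow from simple arithmetic on the Interbrand values. As a sketch using Apple's numbers quoted above (`relative_growth` is just an illustrative helper, not part of Interbrand's methodology):

```python
def relative_growth(old, new):
    """Percentage growth from `old` to `new`."""
    return (new - old) / old * 100.0

# Apple's Interbrand brand value in million USD, from the tables:
# the 2016 value and the absolute growth over 2012-2016
apple_2016 = 178_119
apple_growth_abs = 101_551
apple_2012 = apple_2016 - apple_growth_abs  # implied 2012 value: 76,568

print(round(relative_growth(apple_2012, apple_2016)))  # 133
```

The result matches the 133% relative-growth figure for Apple in the first table, which is a useful consistency check between the two tables.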
2016-11-09T13:23:00-08:00
Many commentators rushed into print when they heard that Craig Barratt, senior vice-president of Google's parent company Alphabet and CEO of Access (the unit of which Google Fiber is part), stated that he would quit the job and that Google would slow down or stop its fiber deployment. So, yes, obviously something is happening at Google; but at the same time, the company has a commitment to complete the fiber deployment projects it has already started and also to build the many new networks that have been announced over the last six months. So for the foreseeable future, it doesn't look as though there is an end to their FttH activities — to the contrary, in all, some 25 networks are under construction or are in the development stage. But certainly something is happening at the fiber end of the company, and this brings me back to analyses I have written about Google in the past. Back in 2010, I wrote: The company has a vested interest in making sure that the digital economy is developed and, like most others, it is frustrated by the extremely slow pace at which the telcos are upgrading their networks. They will do anything to nudge the process along, or to kick-start developments. I remain of the view that Google has no intention whatsoever of becoming a telco; that would not make any sense. So all those endless blog discussions (mainly in the USA) about what underlying business model Google will base its FttH model on, and what the cost per house will be to lay fiber, are utterly useless. The company will want to establish a business model for high-speed telecoms infrastructure. FttH will produce this model along the lines of the trans-sector synergy that this will create, as we have been discussing in various BuddeComm reports. Many telcos insist that there is no business model for this, but Google is now placing its resources behind such investments, to show how economically viable business cases can be developed. 
Their projects can become demonstration sites that can be replicated elsewhere. It looks as though this analysis still holds today, more than 6 years later. But the obvious question that can now be asked is: has that goal been achieved? This, of course, is debatable. Most certainly the telcos have been pushed into action, and announcements have been made by AT&T and Verizon indicating that they are taking FttH more seriously. Late last year AT&T announced the rollout of FttH to 11.7 million homes, and earlier this year Verizon restarted its FttH rollout with the announcement of a $300 million FttH network for Boston. So, while this is still a long way from being a national network, things are moving. Furthermore, an increasing number of municipalities are involved in securing FttH networks for their cities. And so, yes, perhaps Google's job is at least partly done. It is also important to realize that, as I have also stated on many occasions, it doesn't make sense anywhere in the world to build competing FttH networks. There simply is no economic model for such an approach. Indications are that in competing markets Google doesn't get more than a 35%–40% market penetration, and it is questionable whether that is enough for a sustainable FttH business model. But in my analysis of Google, this is not too critical an issue for them, as I conclude that their main interest is not becoming a large-scale infrastructure company but stimulating the deployment of high-speed broadband. So, in the end, in my view, it was never Google's intention to become a serio[...]