Subscribe: CircleID: Featured Blogs
http://www.circleid.com/rss/rss_comm/

CircleID: Featured Blogs



Latest blog postings on CircleID



Updated: 2017-09-25T20:15:00-08:00

 



The Role of Domain Name Privacy and Proxy Services in URS Disputes

2017-09-25T13:15:00-08:00

Here's another apparent limitation of the Uniform Rapid Suspension System (URS), the domain name dispute policy that applies to the new generic top-level domains (gTLDs): Proceedings are unlikely to unmask cybersquatters hiding behind privacy or proxy services. Domain name registrants often use these privacy and proxy services to hide their identities when they register domain names. The services have legitimate uses but are controversial.

In proceedings under the Uniform Domain Name Dispute Resolution Policy (UDRP), the privacy veil is often lifted after a complaint has been filed, allowing a trademark owner to learn the identity of the so-called underlying registrant. Doing so can be beneficial to a trademark owner complainant, creating leverage and possibly leading to further evidence of bad faith or links to additional domain names. At WIPO (the leading provider of UDRP services), a complainant is typically offered an opportunity to amend a complaint after the underlying registrant has been identified during the administrative compliance phase. Here's what WIPO's Overview 3.0 says (in part) on the topic: When provided with underlying registrant information which differs from the respondent named in the complaint, a complainant may either add the disclosed underlying registrant as a co-respondent, or replace the originally named privacy or proxy service with the disclosed underlying registrant. In either event, complainants may also amend or supplement certain substantive aspects of the complaint (notably the second and third elements) in function of any such disclosure.

However, the URS — a quicker process that is "not intended for use in any proceedings with open questions of fact, but only clear cases of trademark abuse" — does not provide for such amendments or supplements to a complaint. Indeed, the Forum (the leading provider of URS services) has a supplemental rule that expressly says: "The Complaint may not be amended at any time." As a result, a review of URS cases shows that many identify the respondent only as a privacy or proxy service, such as the popular Domains By Proxy, because the underlying registrant is never disclosed during the course of a URS proceeding. Had the trademark owner elected instead to file a UDRP complaint for the same domain name (which is almost always an option, given that all new gTLDs are subject to both the URS and the UDRP), then the record might have identified the underlying registrant rather than the privacy or proxy service.

Of course, the URS continues to offer some advantages over the UDRP (notably quicker, less expensive resolutions), but the URS has long been criticized for its shortcomings (such as its ability only to suspend, not transfer, a disputed domain name). Now, it seems that the URS has yet another shortcoming that trademark owners should consider when deciding whether to file a URS or UDRP complaint: If learning a hidden registrant's true identity is important, then a UDRP proceeding might be a better option than the URS.

Written by Doug Isenberg, Attorney & Founder of The GigaLaw Firm. Follow CircleID on Twitter. More under: Domain Names, Law, Policy & Regulation, Privacy, Top-Level Domains [...]



Catalan Government Claims Spanish Online Censorship Breaching EU Laws

2017-09-24T07:50:00-08:00

The Catalan government has written to the European Commission claiming that the Spanish government is in breach of EU law. In a letter from Jordi Puigneró, Secretary of Telecommunications, Cybersecurity and the Digital Society at the Government of Catalonia, addressed to Andrus Ansip, European Commissioner for Digital Economy and Society, the Catalan government calls out the moves by the Madrid government as censorship.

Over the past ten days, the Spanish government has issued court orders to multiple entities, including the .cat domain name registry, whose offices were also raided, as well as to Spanish ISPs. The goal is to block access to websites and other content related to the upcoming referendum in Catalonia. The letter refers to the court order the .cat registry received, which demanded that it block all .cat domain names that "could be about or point to any content related to the referendum". It also cites the worldwide media coverage of the raid on the .cat offices and the blocking of multiple websites (and domains) related to the referendum.

Apparently, the court orders being issued to the ISPs in Spain are very broad, as the letter refers to orders blocking access to "all websites publicised by any member of the Catalan government in any social network that has a direct or indirect relation with the referendum without any further court order". How ISPs are meant to implement that kind of court order is beyond me, as it sounds incredibly vague and the judicial equivalent of using a sledgehammer to crack a walnut.

Whether the European Commission will make any public comments in reaction to this letter is debatable, but the concerns being raised by Jordi Puigneró are shared by many observers from around the globe. The Spanish government's actions in Catalonia have received widespread criticism from many in civil society, including ISOC and the EFF.

Written by Michele Neylon, MD of Blacknight Solutions. Follow CircleID on Twitter. More under: Censorship, Internet Governance, Policy & Regulation, Registry Services, Top-Level Domains [...]



What Does the Future Hold for the Internet?

2017-09-22T14:38:00-08:00

Explore the interactive 2017 Global Internet Report: Paths to Our Digital Future.

What does the future hold for the Internet? This is the fundamental question that the Internet Society is posing through the report just launched today, our 2017 Global Internet Report: Paths to Our Digital Future. The report is a window into the diverse views and perspectives of a global community that cares deeply about how the Internet will evolve and impact humanity over the next 5-7 years. We couldn't know what we would find when we embarked on the journey to map what stakeholders believe could shape the future of the Internet, nor can we truly know what will happen to the Internet, but we do now have a sense of what we need to think about today to help shape the Internet of tomorrow. The report reflects the views and aspirations of our community as well as some of the most pressing challenges facing the future of this great innovation.

What have we learned? We've learned that our community remains confident that the core Internet values that gave rise to the Internet remain valid. We also heard very strong worries that the user-centric model of the Internet is under extraordinary pressure from governments, from technology giants, and even from the technology itself. There is a sense that there are forces beyond the users' control that may define the Internet's future. That the user may no longer be at the center of the Internet's path.

It is, perhaps, trite to say that the world is more connected today than ever before. Indeed, we are only beginning to understand the implications of a hyperconnected society that is dependent on the generation, collection and movement of data in ways that many do not fully understand. The Internet of the future will most certainly enable a host of products and services that could revolutionize our daily lives. At the same time, our dependence on the technology raises a myriad of challenges that society may be ill-equipped to address.

Clearly, the Internet is increasingly intertwined with a geopolitical environment that feels uncertain and even precarious. The Internet provides governments with opportunities to better the lives of their people, but also with tools for surveillance and even control. This report highlights the serious choices we all must make about how to ensure that rights and freedoms prevail in the Internet of the future. The decisions we make will determine whether humanity remains in the driver's seat of technology or not. In short, the decisions we make about the Internet can no longer be seen as "separate", as "over there" — the implications of a globally interconnected world will be felt by all of us. And the decisions we make about the Internet will be felt far and wide.

We are still just beginning to understand the implications of a globally connected society and what it will mean for individuals, business, government and society at large. How we address the opportunities and challenges that today's forces of change are creating for the future is paramount, but one thing above all others is certain — the choices are ours alone to make, and the future we want is up to us to shape.

Explore the interactive 2017 Global Internet Report: Paths to Our Digital Future.

Written by Sally Shipman Wentworth, VP of Global Policy Development, Internet Society. Follow CircleID on Twitter. More under: Broadband, Censorship, Cybersecurity, Internet Governance, Internet Protocol, Mobile Internet, Networks, Policy & Regulation, Privacy, Web [...]



Google Global Cache Servers Go Online in Cuba, But App Engine Blocked

2017-09-22T11:28:00-08:00

I had hoped to get more information before publishing this post, but difficult Internet access in Cuba and now the hurricane got in the way — better late than never. Cuban requests for Google services are being routed to GCC servers in Cuba, and all Google services that are available in Cuba are being cached — not just YouTube. That will cut latency significantly, but Cuban data rates remain painfully slow. My guess is that Cubans will notice the improved performance in interactive applications, but maybe not perceive much of a change when watching a streaming video. Note the italics in the above paragraph — evidently, Google blocks access to their App Engine hosting and application development platform. Cuban developers cannot build App Engine applications, and Cubans cannot access applications like the Khan Academy or Google's G-Suite. [...]
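To see why in-country caching helps interactive use more than streaming when the access link stays slow, a rough back-of-the-envelope model helps: total time is roughly the number of protocol round trips times the RTT, plus payload size divided by bandwidth. The sketch below uses entirely assumed numbers (a 1 Mb/s link, 250 ms to an off-island server, 50 ms to an in-country cache); none of these figures come from the post or from measurements in Cuba.

```python
# Rough model (illustrative assumptions only) of why lower RTT helps
# interactive applications more than streaming when bandwidth stays low.

def transfer_time(rtt_s, round_trips, payload_bytes, bandwidth_bps):
    """Crude estimate: protocol round trips plus serialization time."""
    return rtt_s * round_trips + (payload_bytes * 8) / bandwidth_bps

BANDWIDTH = 1_000_000          # assume ~1 Mb/s access link
RTT_REMOTE = 0.250             # assume ~250 ms to an off-island server
RTT_LOCAL = 0.050              # assume ~50 ms to an in-country cache

# Interactive page: ~100 KB spread over ~6 request/response exchanges.
interactive = dict(round_trips=6, payload_bytes=100_000)
# Streaming: a 25 MB video segment fetched in essentially one long transfer.
streaming = dict(round_trips=2, payload_bytes=25_000_000)

for name, workload in [("interactive", interactive), ("streaming", streaming)]:
    far = transfer_time(RTT_REMOTE, bandwidth_bps=BANDWIDTH, **workload)
    near = transfer_time(RTT_LOCAL, bandwidth_bps=BANDWIDTH, **workload)
    print(f"{name:11s}: {far:6.1f}s uncached -> {near:6.1f}s cached "
          f"({100 * (far - near) / far:.0f}% faster)")
```

With those assumed figures, the interactive workload gets roughly 50 percent faster while the large streaming transfer barely changes, which matches the intuition in the paragraph above.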



Networks Are Not Cars Nor Cell Phones

2017-09-21T09:24:00-08:00

The network engineering world has long emphasized the longevity of the hardware we buy; I have sat through many vendor presentations where the salesman says "this feature set makes our product future proof! You can buy with confidence knowing this product will not need to be replaced for another ten years..." Over at the Networking Nerd, Tom has an article posted supporting this view of networking equipment, entitled Network Longevity: Think Car, not iPhone. It seems, to me, that these concepts of longevity have the entire situation precisely backward. These ideas of "car length longevity" and "future proof hardware" look at the network from the perspective of an appliance, rather than from the perspective of a set of services. Let me put this in a little bit of context by considering two specific examples.

In terms of cars, I have owned four in the last 31 years. I owned a Jeep Wrangler for 13 years, a second Jeep Wrangler for eight years, and a third Jeep Wrangler for nine years. I have recently switched to a Jeep Cherokee, which I have now been driving for just about a year. What if I bought network equipment like I buy cars? What sort of router was available nine years ago? That is 2008. I was still working at Cisco, and my lab, if I remember right, was made up of 7200s and 2600s. Younger engineers probably look at those model numbers and see completely different equipment than what I actually had; I doubt many readers of this blog ever deployed 7200s of the kind I had in my lab in their networks. Do I really want to run a network today on 9-year-old hardware? I don't see how the answer to that question can be "yes." Why?

First, do you really know what hardware capacity you will need in ten years? Really? I doubt your business leaders can tell you what products they will be creating in ten years beyond a general description, nor can they tell you how large the company will be, who their competitors will be, or what shifts might occur in the competitive landscape. Hardware vendors try to get around this by building big chassis boxes and selling blades that will slide into them. But does this model really work? The Cisco 7500 was the current chassis box 9 years ago, I think — even if you could get blades for it today, would it meet your needs? Would you really want to pay the power and cooling for an old 7500 for 9 years because you didn't know if you would need one or seven slots nine years ago? Building a hardware platform for ten years of service in a world where two years is too far to predict is like rearranging the chairs on the Titanic. It's entertaining, perhaps, but it's pretty pointless entertainment.

Second, why are we not taking the lessons of the compute and storage worlds into our thinking, and learning to scale out, rather than scaling up? We treat our routers like the server folks of yore — add another blade slot and make it go faster. Scale up makes your network do this — do you see those grey areas? They are costing you money. Do you enjoy defenestrating money? These are symptoms of looking at the network as a bunch of wires and appliances, as hardware with a little side of software thrown in.

What about the software? Well, it may be hard to believe, but pretty much every commercial operating system available for routers today is an updated version of software that was available ten years ago. Some, in fact, are more than twenty years old. We don't tend to see this because we deploy routers and switches as appliances, which means we treat the software as just another form of hardware. We might deploy ten to fifteen different operating systems in our network without thinking about it — something we would never do in our data centers, or on our desktop computers. So what this appliance-based way of looking at things emphasizes is this: buy enough hardware to last you ten years, and treat the software as fungible — software is a sec[...]
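To make the scale-up versus scale-out point concrete, here is a small, purely hypothetical calculation: one network sized on day one for a ten-year demand forecast, versus one that adds fixed-size units only as demand actually arrives. The growth curve, unit size, and capacities are invented for illustration; the "stranded capacity" it reports is the grey area the post refers to.

```python
# Illustrative comparison (hypothetical numbers, not vendor pricing) of paying
# up front for ten years of projected chassis capacity versus adding fixed-form
# units as demand actually materializes.

YEARS = 10
demand = [1, 1.3, 1.7, 2.2, 2.9, 3.7, 4.8, 6.3, 8.2, 10.6]  # Tb/s, assumed ~30%/yr growth

# Scale-up: size a chassis for year-10 demand on day one.
chassis_capacity = demand[-1]
scale_up_idle = [chassis_capacity - d for d in demand]       # stranded capacity per year
print(f"scale-up:  average stranded capacity {sum(scale_up_idle) / YEARS:.1f} Tb/s")

# Scale-out: add 2 Tb/s units only when demand crosses what is installed.
installed, additions, scale_out_idle = 0, 0, []
for d in demand:
    while installed < d:
        installed += 2
        additions += 1
    scale_out_idle.append(installed - d)
print(f"scale-out: {additions} incremental units, "
      f"average stranded capacity {sum(scale_out_idle) / YEARS:.1f} Tb/s")
```

Under these made-up assumptions the pre-built chassis strands roughly six times as much capacity, on average, as incremental growth does, which is the money the grey areas represent.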



The Madness of Broadband Speed Tests

2017-09-19T10:55:00-08:00

The broadband industry has falsely sold its customers on "speed", so unsurprisingly "speed tests" have become an insane and destructive benchmark.

As a child, I would go to bed, and sometimes the garage door would swing open before I went to sleep. My father had come home early from the late shift, where he was a Licensed Aircraft Maintenance Engineer for British Airways. I would wait for him eagerly, and he would come upstairs, still smelling of kerosene and Swarfega. With me lying in bed, he would tell me tales of his work, and stories about the world. Just don't break the wings off as you board! Funnily enough, he never told me about British Airways breaking the wings off its aircraft. You see, he was involved in major maintenance checks on Boeing 747s. He joined BOAC in 1970 and stayed with the company for 34 years until retirement. Not once did he even hint at any desire for destructive testing for aircraft.

Now, when a manufacturer makes a brand new airplane type, it does test it to destruction. Here's a picture I shamelessly nicked showing the Airbus A350 wing flex test. I can assure you, they don't do this in the British Airways hangars TBJ and TBK at Hatton Cross maintenance base at Heathrow. Instead, they have non-destructive testing using ultrasound and X-rays to look for cracks and defects.

So what's this all got to do with broadband? Well, we're doing the equivalent of asking the customers to break the wings off every time they board. And even worse, our own engineers have adopted destructive testing over non-destructive testing! Because marketing departments at ISPs refuse to define what experience they actually intend to deliver (and what is unreasonable to expect), the network engineers are left with a single and simple marketing requirement: "make it better than it was". When you probe them on what this means, they shrug and tell you "well, we're selling all our products on peak speed, so we try to make the speed tests better". This, my friends, is bonkers.

The first problem is that the end users are conducting a denial-of-service attack on themselves and their neighbours. A speed test deliberately saturates the network, placing it under maximum possible stress. The second problem is that ISPs themselves have adopted speed tests internally, so they are driving mad levels of cost carrying useless traffic designed to over-stress their network elements. Then to top it all, regulators are encouraging speed tests as a key metric, deploying huge numbers of boxes hammering the broadband infrastructure even in its most fragile peak hour. The proportion of traffic coming from speed tests is non-trivial.

So what's the alternative? Easy! Instead of destructive testing, do non-destructive testing. We know how to X-ray a network, and the results are rather revealing. If you use the right metrics, you can also model the performance limits of any application from the measurements you take. Even a speed test! So you don't need to snap the wings off your broadband service every time you use it after all.

I think I'll tell my daughters at their next bedtime. It's good life guidance. Although I can imagine my 14-year-old dismissing it as another embarrassing fatherly gesture and uninteresting piece of parental advice. Sometimes it takes a while to appreciate our inherited wisdom.

Written by Martin Geddes, Founder, Martin Geddes Consulting Ltd. Follow CircleID on Twitter. More under: Access Providers, Broadband, Telecom [...]
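As a sketch of what non-destructive measurement can look like in practice, the snippet below samples small TCP handshakes and reports latency and jitter instead of saturating the link with a bulk transfer. The target host, port, sample count, and interval are arbitrary placeholders, and this is only one crude proxy for the richer metrics the author has in mind.

```python
# Minimal "non-destructive" probe: time small TCP handshakes rather than
# filling the link. Host, port, and sample counts are placeholders.

import socket
import statistics
import time

def tcp_rtt(host, port=443, timeout=3.0):
    """Time a single TCP three-way handshake (a tiny, non-saturating probe)."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000  # milliseconds

def probe(host, samples=10, interval=0.5):
    rtts = []
    for _ in range(samples):
        rtts.append(tcp_rtt(host))
        time.sleep(interval)
    return {
        "min_ms": min(rtts),
        "median_ms": statistics.median(rtts),
        "jitter_ms": statistics.pstdev(rtts),
    }

if __name__ == "__main__":
    print(probe("example.com"))
```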



Preliminary Thoughts on the Equifax Hack

2017-09-17T10:08:00-08:00

As you've undoubtedly heard, the Equifax credit reporting agency was hit by a major attack, exposing the personal data of 143 million Americans and many more people in other countries. There's been a lot of discussion of liability; as of a few days ago, at least 25 lawsuits had been filed, with the state of Massachusetts preparing its own suit. It's certainly too soon to draw any firm conclusions about who, if anyone, is at fault — we need more information, which may not be available until discovery during a lawsuit — but there are a number of interesting things we can glean from Equifax's latest statement. First and foremost, the attackers exploited a known bug in the open source Apache Struts package. A patch was available on March 6. Equifax says that their "Security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company's IT infrastructure." The obvious question is why this particular system was not patched. One possible answer is, of course, that patching is hard. Were they trying? What does "took efforts to identify and to patch" mean? Were the assorted development groups actively installing the patch and testing the resulting system? It turns out that this fix is difficult to install: You then have to hope that nothing is broken. If you're using Struts 2.3.5 then in theory Struts 2.3.32 won't break anything. In theory it's just bug fixes and security updates, because the major.minor version is unchanged. In theory. In practice, I think any developer going from 2.3.5 to 2.3.32 without a QA cycle is very brave, or very foolhardy, or some combination of the two. Sure, you'll have your unit tests (maybe), but you'll probably need to deploy into your QA environment and do some kind of integration testing too. That's assuming, of course, that you have a compatible QA environment within which you can deploy your old, possibly abandoned application. Were they trying hard enough, i.e., devoting enough resources to the problem? Ascertaining liability here — moral and/or legal — can't be done without seeing the email traffic between the security organization and the relevant development groups; you'd also have to see the activity logs (code changes, test runs, etc.) of these groups. Furthermore, if problems were found during testing, it might take quite a while to correct the code, especially if there were many Struts apps that needed to be fixed. As hard as patching and testing are, though, when there are active exploitations going on you have to take the risk and patch immediately. That was the case with this vulnerability. Did the Security group know about the active attacks or not? If they didn't, they probably aren't paying enough attention to important information sources. Again, this is information we're only likely to learn through discovery. If they did know, why didn't they order a flash-patch? Did they even know which systems were vulnerable? Put another way, did they have access to a comprehensive database of hardware and software systems in the company? They need one — there are all sorts of other things you can't do easily without such a database. Companies that don't invest up front in their IT infrastructure will hurt in many other ways, too. Equifax has a market capitalization of more than $17 billion; they don't really have an excuse for not running a good IT shop. It may be, of course, that Equifax knew all of that and still chose to leave the vulnerable servers up. Why? 
Apparently, the vulnerable machine was their "U.S. online dispute portal". I'm pretty certain that they're required by law to have a dispute mechanism, and while it probably doesn't have to be a website (and some people suggest that complainants shouldn't use it anyway), it's almost certainly a much cheaper way to receive disputes than is paper mail. T[...]
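The post's point about knowing which systems are vulnerable presupposes exactly the kind of inventory query sketched below: given a (hypothetical) database of applications and the Struts version each one runs, flag anything in the range the March 2017 fix covered. The inventory rows are made up, and the 2.3.5 through 2.3.31 bounds are an assumption drawn from the post's own "2.3.5 to 2.3.32" upgrade example rather than from Equifax.

```python
# Sketch of an inventory check: flag applications running an Apache Struts
# version in an assumed vulnerable range (2.3.5 inclusive to 2.3.32 exclusive,
# based on the upgrade example quoted in the post). Data is hypothetical.

def parse(version):
    return tuple(int(part) for part in version.split("."))

VULN_LOW, VULN_FIXED = parse("2.3.5"), parse("2.3.32")

def is_vulnerable(struts_version):
    v = parse(struts_version)
    return VULN_LOW <= v < VULN_FIXED   # covers only the 2.3.x branch discussed

# Hypothetical inventory rows: (application, owner team, Struts version)
inventory = [
    ("online-dispute-portal", "consumer-web", "2.3.5"),
    ("partner-api",           "b2b",          "2.3.34"),
    ("internal-reporting",    "finance-it",   "2.3.20"),
]

for app, team, version in inventory:
    if is_vulnerable(version):
        print(f"PATCH NOW: {app} ({team}) runs Struts {version}")
```

The check itself is trivial; the hard part the post identifies is having a complete, accurate inventory to run it against.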



In Response to 'Networking Vendors Are Only Good for the Free Lunch'

2017-09-14T15:39:00-08:00

I ran into an article over at the Register this week which painted the entire networking industry, from vendors to standards bodies, with a rather broad brush. While there are bits of truth in the piece, some balance seems to be in order. The article recaps a presentation by Peyton Koran at Electronic Arts (I suspect the Register spiced things up a little for effect); the line of argument seems to run something like this:

- Vendors are only paying attention to larger customers, and/or a large group of customers asking for the same thing; if you are not in either group, then you get no service from any vendor.
- Vendors further bake secret sauce into their hardware, making it impossible to get what you want from your network without buying from them.
- Standards bodies are too slow, and hence useless.
- People are working around this, and getting to the inter-operable networks they really want, by moving to the cloud.
- There is another way: just treat your networking gear like servers, and write your own protocols; after all, you probably already have programmers on staff who know how to do this.

Let's think about these a little more deeply.

Vendors only pay attention to big customers and/or big markets. – Ummm… Yes. I do not know of any company that does anything different here, including the Register itself. If you can find a company that actually seeks the smallest market, please tell me about them, so I can avoid their products, as they are very likely to go out of business in the near future. So this is true, but it is just a part of the real world.

Vendors bake secret sauce into their hardware to increase their profits. – Well, again… Yes. And how is any game vendor any different, for instance? Or what about an online shop that sells content? Okay, next.

Standards bodies are too slow, and hence useless. – Whenever I hear this complaint, I wonder if the person making the complaint has actually ever built a real live running system, or a real live deployed standard that provides interoperability across a lot of different vendors, open source projects, etc. Yes, it often seems silly how long it takes for the IETF to ratify something as a standard. But have you ever considered how many times things are widely implemented and deployed before there is a standard? Have you ever really looked at the way standards bodies work to understand that there are many different kinds of standards, each with a different meaning, and that not everything needs to be the absolute top rung on the standards ladder to be useful? Have you ever asked how long it takes to build anything large and complicated? I guess we could say the entire open source community is slow and useless because it took many years for even the Linux operating system to be widely deployed, and to solve a lot of problems.

Look, I know the IETF is slow. And I know the IETF has a lot more politics than it should. I live both of those things. But I also know the fastest answer is not always the right answer, and throwing away decades of experience in designing protocols that actually work is a pretty dumb idea — unless you really just want to reinvent the wheel every time you need to build a car.

In the next couple of sentences, we suddenly find that someone needs to call out the contradiction police, replete in their bright yellow suits and funny hats. Because now it seems people want inter-operable networks without standards bodies!

Let me make a simple point here that many people just do not seem to realize: You cannot have interoperability across multiple vendors and multiple open source projects without some forum where they can all discuss the best way to do something, and find enough common ground to make their various products inter-operate. I hate to break the news to you, but that forum is called a standards body. [...]



Abusive and Malicious Registrations of Domain Names

2017-09-14T07:43:00-08:00

When ICANN implemented the Uniform Domain Name Dispute Resolution Policy (UDRP) in 1999, it explained its purpose as combating "abusive registrations" of domain names, which it defined as registrations "made with bad-faith intent to profit commercially from others' trademarks (e.g., cybersquatting and cyberpiracy)." (The full statement can be found in the Second Staff Report on Implementation Documents for the Uniform Dispute Resolution Policy, Paragraph 4.1(c)).

Bad actors employ a palette of stratagems, such as combining marks with generic qualifiers, truncating or varying marks, or removing, reversing, and rearranging letters within the second level domain (typosquatting). They are costly to police, and likely even more costly when it comes to maintaining forfeited domain names, but for all the pain they inflict they are essentially plain vanilla irritants. While these kinds of disputes essentially dominate the UDRP docket, there has been an increase in the number of disputes involving malicious registrations. The first instances of "phishing" and "spoofing" appear in a 2005 case, CareerBuilder, LLC v. Stephen Baker, D2005-0251 (WIPO May 6, 2005), in which the Panel found that the "disputed domain name is being used as part of a phishing attack (i.e., using 'spoofed' e-mails and a fraudulent website designed to fool recipients into divulging personal financial data such as credit card numbers, account usernames and passwords, social security numbers, etc.)"

The quainter forms of abuse are registrants looking to pluck lower hanging fruit. They are so obviously opportunistic that respondents don't even bother to appear (they also don't appear in the malicious cases, but for another reason: to avoid identification). The plain vanilla type is represented by such cases as Guess? IP Holder L.P. and Guess? Inc. v. Domain Admin: Damon Nelson — Manager, Quantec LLC, Novo Point LLC, D2017-1350 (WIPO August 24, 2017), in which Complainant's product line includes "accessories." In these types of cases, respondents are essentially looking for visitors. In contrast, malicious registrations are of the kind described, for example, in Google Inc. v. 1&1 Internet Limited, FA1708001742725 (Forum August 31, 2017), in which respondent used the complainant's mark and logo on a resolving website containing offers for technical support and password recovery services, and soliciting Internet users' personal information: ". . . Complainant's exhibit 11 displays a malware message displayed on the webpage, which Complainant claims indicates fraudulent conduct."

Malicious registrations are a step up in that they introduce a new, more disturbing, and even criminal element into the cyber marketplace. Respondents are not just looking for visitors, they are targeting brands for victims. Their bad faith is more than "profit[ing] commercially from others' trademarks"; it is operating websites (or using e-mails) as trojan horses. It aligns registrations actionable under the UDRP with conduct policed and prosecuted by governments. The UDRP, then, is not just a "rights protection mechanism." The term "abusive registration" has enlarged in meaning (and, thus, in jurisdiction) to include malicious conduct generally.

Total security is a pipe dream. ICANN has working groups devoted to mapping the problem, and there are analytical studies assessing its extent in legacy and new TLDs. Some idea of the magnitude is seen in the "Statistical Analysis of DNS Abuse in gTLDs Final Report" commissioned by an ICANN-mandated review team, the Competition, Consumer Trust and Consumer Choice Review Team (CCTRT). Incidents of abusive and malicious activity online and radiating out to affect the public offline represent the universe of cyber crime and uncivil behavior, of which UDRP disputes play a minor, al[...]
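The registration stratagems described above (generic qualifiers, removed, reversed, or rearranged letters) are mechanical enough to enumerate, which is how defensive monitoring typically works. The sketch below generates candidate typosquat strings for a made-up mark; the mark and the qualifier list are purely illustrative.

```python
# Generate candidate typosquat strings for a hypothetical mark, mirroring the
# "stratagems" described above. For defensive monitoring only; the mark
# "examplebrand" and the qualifiers are made up.

def variants(mark, qualifiers=("shop", "login", "support")):
    out = set()
    # combine the mark with generic qualifiers
    for q in qualifiers:
        out.add(mark + q)
        out.add(q + mark)
    # remove one letter
    for i in range(len(mark)):
        out.add(mark[:i] + mark[i + 1:])
    # swap (rearrange) adjacent letters
    for i in range(len(mark) - 1):
        out.add(mark[:i] + mark[i + 1] + mark[i] + mark[i + 2:])
    # double a letter
    for i in range(len(mark)):
        out.add(mark[:i + 1] + mark[i] + mark[i + 1:])
    out.discard(mark)
    return sorted(out)

print(len(variants("examplebrand")), "candidate strings to watch")
```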



Can Constellations of Internet Routing Satellites Compete With Long-Distance Terrestrial Cables?

2017-09-13T14:16:00-08:00

"The goal will be to have the majority of long distance traffic go over this network." —Elon Musk

Three companies, SpaceX, OneWeb, and Boeing, are working on constellations of low-Earth orbiting satellites to provide Internet connectivity. While all three may be thinking of competing with long, terrestrial cables, SpaceX CEO Elon Musk said "the goal will be to have the majority of long-distance traffic go over this (satellite) network" at the opening of SpaceX's Seattle office in 2015 (video below). [SpaceX orbital path schematic, source] Can he pull that off?

Their first constellation will consist of 4,425 satellites operating in 83 orbital planes at altitudes ranging from 1,110 to 1,325 km. They plan to launch a prototype satellite before the end of this year and a second one during the early months of 2018. They will start launching operational satellites in 2019 and will complete the first constellation by 2024. The satellites will use radios to communicate with ground stations, but links between the satellites will be optical.

At an altitude of 1,110 kilometers, the distance to the horizon is 3,923 kilometers. That means each satellite will have a line-of-sight view of all other satellites that are within 7,846 kilometers, forming an immense mesh network. Terrestrial networks are not so richly interconnected, and cables must zig-zag around continents and islands if undersea, and around other obstructions if underground. Latency in a super-mesh of long, straight-line links should be much lower than with terrestrial cable. Additionally, Musk says the speed of light in a vacuum is 40-50 percent faster than in a cable, cutting latency further.

Let's look at an example. I traced the route from my home in Los Angeles to the University of Magallanes in Punta Arenas at the southern tip of Chile. As shown here, the terrestrial route was 14 hops and the theoretical satellite link only five hops. (The figure is drawn roughly to scale.) So, we have 5 low-latency links versus 14 higher-latency links. The gap may close somewhat as cable technology improves, but it seems that Musk may be onto something.

Check out the following video of the speech Musk gave at the opening of SpaceX's Seattle office. His comments about the long-distance connections discussed here come at the three-minute mark, but I'd advise you to watch the entire 26-minute speech: https://www.youtube.com/embed/AHeZHyOnsm4

Written by Larry Press, Professor of Information Systems at California State University. Follow CircleID on Twitter. More under: Access Providers, Broadband, Telecom, Wireless [...]
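The horizon and latency figures above follow from simple geometry and the speed of light, and can be reproduced in a few lines. The sketch below assumes a 6,371 km mean Earth radius and a fiber refractive index of about 1.47 (standard round numbers, not taken from the post), which is why its horizon distance comes out near, but not exactly at, the 3,923 km quoted.

```python
# Reproducing the post's geometry: line-of-sight range from a 1,110 km orbit
# and the propagation-delay advantage of vacuum over fiber. Earth radius and
# the fiber refractive index are assumed round numbers.

import math

R_EARTH = 6371.0          # km, assumed mean Earth radius
ALTITUDE = 1110.0         # km, lower bound of the planned constellation

# Distance from the satellite to its horizon (tangent line to the sphere).
horizon = math.sqrt((R_EARTH + ALTITUDE) ** 2 - R_EARTH ** 2)
print(f"horizon distance: {horizon:,.0f} km")          # ~3,920 km
print(f"max sat-to-sat line of sight: {2 * horizon:,.0f} km")

# Propagation delay for a 10,000 km path: light in vacuum vs. in fiber
# (refractive index ~1.47, roughly the 40-50% penalty Musk cites).
C_VACUUM = 299_792.458    # km/s
C_FIBER = C_VACUUM / 1.47
path_km = 10_000
print(f"one-way delay, vacuum: {path_km / C_VACUUM * 1000:.1f} ms")
print(f"one-way delay, fiber:  {path_km / C_FIBER * 1000:.1f} ms")
```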



Innovative Solutions for Farming Emerge at the Apps for Ag Hackathon

2017-09-13T09:16:00-08:00

Too often, people consider themselves passive consumers of the Internet. The apps and websites we visit are made by people with technical expertise using languages we don't understand. It's hard to know how to plug in, even if you have a great idea to contribute. One solution for this problem is the hackathon. Entering the Hackathon Arena For the uninitiated, a hackathon is a place of hyper-productivity. A group of people converge for a set period of time, generally a weekend to build solutions to specific problems. Often, the hackathon has an overall goal, like the Sacramento Apps for Ag hackathon. "The Apps for Ag Hackathon was created to bring farmers, technologists, students and others from the agriculture and technology industries together in a vibrant, focused environment to create the seeds of new solutions for farmers using technology," says Gabriel Youtsey, Chief Innovation Officer, Agriculture and Natural Resources. Now in its fourth year, the hackathon was bigger than ever and was held at The Urban Hive in Sacramento, with the pitch presentations taking place during the California State Fair. The event kicked off on Friday evening, with perspectives from a farmer on the challenges for agriculture in California, including labor, water supply, food safety, and pests, and how technology can help solve them. Hackathon participants also had opportunities to get up and talk about their own ideas for apps or other technology-related concepts to solve food and agriculture problems for farmers. From there, teams freely formed based on people's skills and inclinations. Although the hackathon is competitive, there is a great deal of collaboration happening, as people hash out ideas together. The hackathon itself provides tools and direction, and experts provide valuable advice and mentorship. At the end of the event, the teams presented working models of their apps and a slide deck to describe the business plan. Judges then decided who got to go home with the prizes, which often include support like office space, cash, and cloud dollars so that developers can keep building their software. For Entrepreneurs, Newbies, and Techies Alike In late July of this year, three people with very different career backgrounds entered the Apps for Ag Hackathon to dedicate their weekend to building a piece of software. They all walked away with a top prize and a renewed commitment to reimagining how technology can contribute to agriculture and food production. In the room was Sreejumon Kundilepurayil, a hackathon veteran who has worked for tech giants building mobile and software solutions, Scott Kirkland, a UC Davis software developer and gardener, and Heather Lee, a self-described generalist in business and agritourist enthusiast. "I was terrified," Lee shared. "I'm tech capable — I've taken some coding classes — but I had no idea what my role would be. I decided to go and put myself in an uncomfortable position. When I got there, I realized that telling a story was my role." While her team members were mapping out the API and back-end development, Lee was working on the copy, graphics, video, and brand guide. Her idea for a mobile app that connects farmers and tourists for unique day-trips to farms ended up winning third place. First place went to Kundilepurayul and Vidya Kannoly for an app called Dr Green, which will help gardeners and farmers diagnose plant diseases using artificial intelligence and machine learning. 
Initially built for the Californian market, it will eventually be available globally as the machine gets more and more adept at identifying plants and problems. Through their phone, growers will also have access to a messaging feature to ask questions and get advice. The first place winners! The benefits (and limit[...]



Amazon's Letter to ICANN Board: It's Time to Approve Our Applications for .AMAZON TLDs

2017-09-12T14:54:00-08:00

When ICANN launched the new gTLD program five years ago, Amazon eagerly joined the process, applying for .AMAZON and its Chinese and Japanese translations, among many others. Our mission was — and is — simple and singular: We want to innovate on behalf of our customers through the DNS. ICANN evaluated our applications according to the community-developed Applicant Guidebook in 2012; they achieved perfect scores. Importantly, ICANN's Geographic Names Panel determined that "AMAZON" is not a geographic name that is prohibited or one that requires governmental approval. We sincerely appreciate the care with which ICANN itself made these determinations, and are hopeful that a full approval of our applications is forthcoming. In a letter we sent to the ICANN Board on September 7, 2017 (the full text of which may be found below), we laid out the reasons for why our applications should be swiftly approved now that an Independent Review Process (IRP) panel found in our favor. Our letter highlights the proactive engagement we attempted with the governments of the Amazonia region over a five year period to alleviate any concerns about using .AMAZON for our business purposes. First, we have worked to ensure that the governments of Brazil and Peru understand we will not use the TLDs in a confusing manner. We proposed to support a future gTLD to represent the region using the geographic terms of the regions, including .AMAZONIA, .AMAZONICA or .AMAZONAS. We also offered to reserve for the relevant governments certain domain names that could cause confusion or touch on national sensitivities. During the course of numerous formal and informal engagements, we repeatedly expressed our interest in finding an agreed-upon outcome. And while the governments have declined these offers, we stand by our binding commitment from our July 4, 2013 Public Interest Commitment (PIC) to the .AMAZON applications, which stated that we will limit registration of culturally sensitive terms — engaging in regular conversations with the relevant governments to identify these terms — and formalizing the fact that we will not object to any future applications of .AMAZONAS, .AMAZONIA and .AMAZONICA. We continue to believe it is possible to use .AMAZON for our business purposes while respecting the people, culture, history, and ecology of the Amazonia region. We appreciate the ICANN Board's careful deliberation of our applications and the IRP decision. But as our letter states, approval of our .AMAZON applications by the ICANN Board is the only decision that is consistent with the bottom-up, multistakeholder rules that govern ICANN and the new gTLD program. We urge the ICANN Board to now approve our applications. An ICANN accountable to the global multistakeholder community must do no less. The full text of our letter is below. * * * Dear Chairman Crocker and Members of the ICANN Board of Directors: We write as the ICANN Board considers the July 10, 2017 Final Declaration of the Independent Review Process Panel (IRP) in Amazon EU S.à.r.l. v. ICANN regarding the .AMAZON Applications. Because the Panel concluded that the Board acted in a manner inconsistent with its Bylaws, we ask the Board to immediately approve our long-pending .AMAZON Applications. 
Such action is necessary because there is no sovereign right under international or national law to the name "Amazon," because there are no well-founded and substantiated public policy reasons to block our Applications, because we are committed to using the TLDs in a respectful manner, and because the Board should respect the IRP accountability mechanism. First, the Board should recognize that the IRP Panel carefully examined the legal and public policy reasons offered by the ob[...]



CE Router Certification Opens Up the Last Mile to IPv6 Fixed-Line

2017-09-12T08:08:00-08:00

Most end users probably have little awareness of IPv6. The mainstream situation in the industry is a standoff between network carriers and content and service providers: carriers believe that, owing to the lack of IPv6 content and services, user demand for IPv6 is very small, while content and service providers hold that users cannot reach their content and services over IPv6, so why should they offer them. Dr. Song Linjian of CFIEC stated in the article "China, towards fully-connected IPv6 networks" that the chicken-and-egg paradox between IPv6 networks and content is only temporary; it exists, but it is not the key obstacle. China has already prepared itself, and once the last mile problem is solved, the user base will grow quickly.

Long ago, every telecom carrier began strictly enforcing procurement requirements that network devices must support IPv6, for example through IPv6 Ready Logo testing and certification. However, the CE devices (home gateways, wireless routers, etc.) that users purchase themselves mostly do not support IPv6, which creates the last mile problem. "When IPv6 is still burgeoning, it is hard to require vendors and users to have devices that are IPv6-enabled and IPv6-certified. The enterprises that produce mature CE Routers (Customer Edge Routers, home gateway routers) supporting IPv6 do not launch those products in the Chinese market, because customers do not demand IPv6. This has become the narrowest bottleneck hindering the development of IPv6 fixed-line users," said Li Zhen, Director of BII-SDNCTC, with reference to fixed-line IPv6 development.

In the upcoming era of IoT, more and more devices need to be connected, and home gateway CE routers, as the switching center of home network information and data, need full support for IPv6. From another perspective, this shows that home gateways are finally getting real attention with respect to IPv6. On March 19th, 2014, the IPv6 Forum and the IPv6 Ready Logo committee officially announced the IPv6 Ready CE Router Logo conformance and interoperability testing and certification program, a brand-new CE Router certification program in full support of next-generation IPv6 deployment and commercialization.

According to statistics from the IPv6 Forum, about 3,000 network devices have passed IPv6 Ready certification, so the overall rate of IPv6 support is high. But for home gateway CE devices, under the CE Router program within the IPv6 Ready Logo framework, only 17 devices, from US Netgear, ZTE, Broadcom, and others, have passed certification. As the key to the last mile of IPv6 access in households, the Chinese market for routing devices holds great potential, and CE Router certified devices will have a stronger competitive edge in next-generation network deployment and commercialization. According to the Global IPv6 Testing Center, the devices certified under the CE Router Logo are smart home gateways such as home routers, wireless routers, and GPON and EPON end devices. The testing covers the core protocols (Phase-2 enhanced certification), all of the DHCPv6 tests, and RFC 7084. Compared to other certifications (Core, DHCPv6, IPsecv6, SNMPv6), this certification is more narrowly targeted at these devices and much stricter.

In the future, more CE routers will be IPv6-certified, and seamless deployment of home IPv6 will gradually be realized, solving the last mile problem of carrier IPv6 access. This will have far-[...]
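For readers curious whether their own last mile already delivers IPv6, a quick check is to ask the operating system which source address it would pick for a public IPv6 destination. The sketch below uses Google's public DNS resolver address purely as a convenient, well-known IPv6 target (an assumption, not anything from the article); a missing or link-local-only address suggests the home CE router or the carrier is not providing IPv6.

```python
# Rough last-mile IPv6 check: see whether the OS can choose a global IPv6
# source address for a public destination. No packets are sent, because
# connect() on a UDP socket only selects a route and source address.

import socket

def global_ipv6_source(probe_addr="2001:4860:4860::8888"):
    """Return the local IPv6 source address that would be used, or None."""
    try:
        s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        s.connect((probe_addr, 53))
        addr = s.getsockname()[0]
        s.close()
        # Crude filter: ignore link-local and unspecified addresses.
        return None if addr.startswith(("fe80", "::")) else addr
    except OSError:
        return None

addr = global_ipv6_source()
print(f"global IPv6 source address: {addr}" if addr else "no usable IPv6 path")
```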



Lessons Learned from Harvey and Irma

2017-09-09T15:28:00-08:00

One of the most intense natural disasters in American history occurred last week. Hurricane Harvey challenged the state of Texas, while Florida braced for Irma. As with all natural disasters in this country, Americans are known to bond during times of crisis and help each other during times of need. Personally, I witnessed these behaviors during the 1989 quake in San Francisco. You may wish to donate or get involved with Hurricane Harvey relief to help the afflicted. That's great, but as we all know, we should be wary of who we connect with online.

Scammers are using Hurricane Harvey and Irma relief efforts as con games and, even more despicably, as phishbait. The FTC warned last week that there are many active relief scams in progress and noted that there always seems to be a spike in registration of bogus domains. If you doubt a charity you are not familiar with, you are wise to think before you give. We recommend you do some common sense vetting and donate through a charity you can verify. Even better, check out the Wise Giving Alliance from the Better Business Bureau, a tool to verify legitimate charities.

In this article, we focus on a group of shameless miscreants that are profiting from the misfortune of others during times of crisis and natural disasters. We illuminate the intensity of malicious domain registration in the days before and after disasters like Hurricanes Harvey and Irma. Finally, we address what we can learn during these difficult times.

The intensity of malicious domain creation during and for several days after Hurricane Harvey is appalling. On August 30th alone, several hundred domains were created with the term "harvey" in them. While not all of the registrants had malicious intent, I'm betting at least a small percentage of them did. Their goal was to extort money, data, or both from innocent victims who happened to be in harm's way, as well as from good Samaritans whose compassion for the victims made them vulnerable. Searches of "Harvey"- and "Irma"-related domains show that thousands of such domains were created between August 28th and September 8th. That does not even take into account homoglyphs, which will be further outlined in this article. The domain names fall into four broad categories:

- Legal / insurance, using terms such as Attorney, Lawyer, Claims.
- Rebuilding, using terms such as Roofing, Construction.
- Storm tracking, such as WILLHURRICANEIRMAHIT.US.
- New or fraudulent charities, using terms such as Relief, Project, Victims, Help.

The legal / insurance terms are registered a year or more in advance for every hurricane name listed. You can see a full list of future hurricane names here, listed by the National Hurricane Center. By pivoting on the name servers or registrant data, we can see the same actors register all those domains far ahead of time. This infographic shows words that appear in domains registered in August and September so far that relate to hurricane, harvey or irma.

When crises strike, one needs the best tools plus a well-trained team that knows how to maximize the use of this exceptional data. Utilizing DNS techniques that can help your company avoid onboarding fraudulent fundraisers and profiteering opportunists is vital to protecting your company's reputation and the reputation of your outbound IP address ranges.
Here's a deep dive tip that few companies have discovered, but all can apply: As one part of the recursive "domain name resolution" process, the TLD registry zone file connects each domain name to authoritative name server hosts, and each authoritative name server host to an IP address. Starting with one known malicious domain name — or one of your customer domains you are vetting — you can find other domains [...]
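The name-server pivot described above can be illustrated with a tiny, entirely fictitious zone-file extract: starting from one known-bad domain, collect its delegated name servers and list every other domain delegated to the same servers. A real investigation would run this against actual TLD zone data and registrant records rather than the toy sample below.

```python
# Sketch of the name-server "pivot" over a made-up zone-file extract
# (domain -> delegated name server). All names here are fictitious.

from collections import defaultdict

zone_extract = [
    ("harvey-relief-fund.example", "ns1.cheap-dns.example"),
    ("harvey-relief-fund.example", "ns2.cheap-dns.example"),
    ("irma-victims-help.example",  "ns1.cheap-dns.example"),
    ("roofing-claims-now.example", "ns2.cheap-dns.example"),
    ("legit-charity.example",      "ns1.reputable-dns.example"),
]

by_ns = defaultdict(set)
for domain, ns in zone_extract:
    by_ns[ns].add(domain)

def pivot(seed_domain):
    """Return other domains sharing a name server with the seed domain."""
    seed_ns = {ns for ns, domains in by_ns.items() if seed_domain in domains}
    related = set().union(*(by_ns[ns] for ns in seed_ns)) - {seed_domain}
    return sorted(related)

print(pivot("harvey-relief-fund.example"))
# -> ['irma-victims-help.example', 'roofing-claims-now.example']
```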



The One Reason Net Neutrality Can't Be Implemented

2017-09-08T10:11:00-08:00

Suppose for a moment that you are the victim of a wicked ISP that engages in disallowed "throttling" under a "neutral" regime for Internet access. You like to access streaming media from a particular "over the top" service provider. By coincidence, the performance of your favoured application drops at the same time your ISP launches a rival content service of its own. You then complain to the regulator, who investigates. She finds that your ISP did indeed change their traffic management settings right at the point that the "throttling" began. A swathe of routes, including the one to your preferred "over the top" application, have been given a different packet scheduling and routing treatment. It seems like an open-and-shut case of "throttling" resulting in a disallowed "neutrality violation". Or is it? Here's why the regulator's enforcement order will never survive the resulting court case and expert witness scrutiny. The regulator is going to have to prove that the combination of all of the network algorithms and settings intentionally resulted in a specific performance degradation. This is important because in today's packet networks performance is an emergent phenomenon. It is not engineered to known safety margins, and can (and does) shift continually with no intentional cause. That means it could just be a coincidence that it changed at that moment. (Any good Bayesian will also tell you that we're assuming a "travesty of justice" prior.) What net neutrality advocates are implicitly saying is this: by inspecting the code and configuration (i.e. more code) of millions of interacting local processes in a network, you can tell what global performance is supposed to result. Furthermore, that a change is one of those settings deliberately gave a different and disallowed performance, and you can show it's not mere coincidence. In the 1930s, Alan Turing proved that you can't even (in general) inspect a single computational process and tell whether it will stop. This is called the Halting Problem. This is not an intuitive result. The naive observer without a background in computer science might assume it is trivially simple to inspect an arbitrary program and quickly tell whether it would ever terminate. What the telco regulator implementing "neutrality" faces is a far worse case: the Performance Problem. Rather than a single process, we have lots. And instead of a simple binary yes/no to halting, we have a complex multi-dimensional network and application performance space to inhabit. I hardly need to point out the inherently hopeless nature of this undertaking: enforcing "neutrality" is a monumental misunderstanding of what is required to succeed. Yet the regulatory system for broadband performance appears to have been infiltrated and overrun by naive observers without an undergraduate-level understanding of distributed computing. Good and smart people think they are engaged in a neutrality "debate", but the subject is fundamentally and irrevocably divorced from technical reality. There's not even a mention of basic ideas like non-determinism in the academic literature. It's painful to watch this regulatory ship of fools steam at full speed for the jagged rocks of practical enforcement. It is true that the Halting Problem can be solved in limited cases. It is a real systems management issue in data centres, and a lot of research work has been done to identify those cases. 
If some process has been running for a long time, you don't want it sitting there consuming electricity forever with no value being created. Likewise, the Performance Problem can be solved in limited cases. However, the regulator is not in a position to [...]
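For readers who have not seen it, the diagonalization behind the Halting Problem the author leans on fits in a few lines of Python-shaped pseudocode: if a general halts() oracle existed, the program below, fed to itself, would halt exactly when it does not halt, a contradiction. This is only an illustration of the classical argument, not anything specific to network regulation, and the halts() stub is by definition unimplementable.

```python
# Classical Halting Problem sketch. The halts() oracle is assumed, not real;
# no total function with this behavior can exist.

def halts(program, program_input):
    """Assumed oracle: True iff program(program_input) terminates."""
    raise NotImplementedError("Turing, 1936: no such total function exists")

def paradox(program):
    if halts(program, program):   # would this program halt when run on itself?
        while True:               # ...then loop forever,
            pass
    return "halted"               # ...otherwise halt: contradiction either way.

# paradox(paradox) can be neither halting nor non-halting if halts() is correct.
```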



Fact Checking the Recent News About Google in Cuba

2017-09-07T14:52:00-08:00

The Cuban Internet is constrained by the Cuban government and, to a lesser extent, the US government, not Google.

Google's Cuba project has been in the news lately. Mary Anastasia O'Grady wrote a Wall Street Journal article called "Google's Broken Promise to Cubans," criticising Google for being "wholly uninterested in the Cuban struggle for free speech" and assisting the Castro government. The article begins by taking a shot at President Obama, who "raved" about an impending Google-Cuba deal "to start setting up more Wi-Fi access and broadband access on the island." (The use of the word "raved" nearly caused me to dismiss the article and stop reading, but I forced myself to continue). The next paragraph tells us "Google has become a supplier of resources to the regime so that Raúl Castro can run internet (sic) at faster speeds for his own purposes." The article goes on to tell us that Brett Perlmutter of Google "boasted" that Google was "thrilled to partner" with a regime-owned museum, featuring a Castro-approved artist. (Like "raved," the use of the word "boasted" seemed Trump-worthy, but I kept reading). O'Grady also referred to a July 2015 Miami Herald report that Perlmutter had pitched a proposal to build an island-wide digital infrastructure that the Cuban government rejected. Next came the buried lead — it turns out this article was precipitated by blocked Cuban access to the pro-democracy Web site Cubadecide.org. Perlmutter tweeted that the site was blocked because of the US embargo on Cuba.

Well, that is enough. Let's do some fact checking.

President Obama's "raving": It is true that President Obama made a number of (in retrospect) overly-optimistic predictions during his Cuba trip, but the use of the word "raving" and the obligatory shot at President Obama were clues that O'Grady might not be impartial and objective.

Google as a supplier of resources: This presumably is a reference to Google's caching servers in Cuba. While these servers marginally speed access to Google applications like Gmail and YouTube, it is hard to see how that helps Raul Castro. It has been reported that Cuba agreed "not censor, surveil or interfere with the content stored" on Google's caching servers. Furthermore, Gmail is encrypted and YouTube is open to all comers — for and against the Cuban government.

Brett Perlmutter's boasting about partnering with a Cuban artist's installation of a free WiFi hotspot: I agree that the WiFi hotspot at the studio of the Cuban artist Kcho is an over-publicized drop in the bucket — much ado about not much.

Google's rejected offer of an island-wide digital infrastructure: I have seen many, many (now I'm channeling Trump) references to this "offer," but have no idea what was offered. Google won't tell me and I've seen no documentation on the offer.

Google's blocking of Cubadecide.org: It is true that Google blocks access to Cubadecide.org. Furthermore, they block access from Cuba to all sites that are hosted on their infrastructure. Microsoft also blocks Cuban access to sites they host; however, Amazon and Rackspace do not. Cubadecide.org could solve their problem by moving their site to Amazon, Rackspace or a different hosting service that does not block Cuban access.

Perlmutter blames the embargo: I don't want to give Google a pass on this. The next question is "why does Amazon allow Cuban access and Google does not?" They are both subject to the same US laws. IBM is a more interesting case — they did not block access at first but changed their policy later.
There may be some reason for IBM and Google behaving differently than Amazon and Racks[...]



Fighting Phishing with Domain Name Disputes

2017-09-07T08:08:00-08:00

I opened an email from GoDaddy over the weekend on my phone. Or so I initially thought. I had recently helped a client transfer a domain name to a GoDaddy account (to settle a domain name dispute), so the subject line of the email — "Confirm this account" — simply made me think that I needed to take another action to ensure everything was in working order. But quickly, my radar went off. Something was amiss:

- The "to" line was blank, which meant that I had been bcc'd on the email.
- The sender's name was "Go Daddy" (with a space that the Internet's popular registrar doesn't really have).
- Although the body of the email contained the GoDaddy logo, the footer of the email referred to "Godaddy" (without a space but with a lowercase "D" that is not consistent with the registrar's style).
- Upon actually reading the email, I immediately noticed the multiple grammatical errors in the first sentence: "Our records shows your account details is incomplete."
- Because I was looking at the email on my phone instead of on a computer, I couldn't readily identify the link behind the prominent "Verify Now" button. But later, once I was in front of a PC, I saw that the link was not to GoDaddy at all.

Fortunately, I didn't click the link until now, as I am writing this blog post. At the moment, it leads to a web page that says, "This Account has been suspended."

Phishing for Info

If I had clicked the link when I received the email, I suspect I would have been taken to a page that looked like GoDaddy's website and would have been prompted to enter my username and password. Doing so, of course, would have disclosed that sensitive information to someone else — someone phishing for exactly that information — which would have compromised everything in my account. Fortunately, as far as I know, I've never clicked on a phishing link — or, if I have, I've never disclosed personal credentials. But phishing scams seem to be getting more common and more sophisticated. And if I — a savvy computer user and domain name attorney — have to think twice before not clicking on a deceptive link, I can only imagine how many other people (hello, Mom?) must actually click on those links without giving it a second thought. I realize this is really not new. But it underscores the importance of domain name disputes and how companies can use the Uniform Domain Name Dispute Resolution Policy (UDRP) and other tools to combat phishing as a way to protect their customers.

Google's Phishing Fights

Just days after my "GoDaddy" experience, I read a UDRP decision involving a complaint brought by Google for the domain name web-account-google.com. According to the decision: Complainant [Google] argues that Respondent engages in a phishing scheme to obtain personal information for users.... Complainant claims that the login information contained on the resolving webpage [associated with the domain name] does not actually function, but rather Respondent uses it to obtain personal information from users. The UDRP panel had no problem finding that this conduct constituted "bad faith" under the policy, and that Google had satisfied the UDRP's other two elements as well, and it ordered the domain name transferred to Google.

[Screenshot of web page at www.web-account-google.com, captured September 5, 2017]

However, as of this writing, the UDRP decision had not yet been implemented, so, naturally, I went to see what the web page looked like using this domain name, that is, the page at www.web-account-google.com. As the image h[...]
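The manual checks described above (mismatched sender domain, link targets that do not point at the brand) are easy to automate for triage. The sketch below parses a fabricated message resembling the one described and flags links whose host does not fall under the domain the mail claims to come from; the sample headers, body, and claimed brand domain are all made up for illustration.

```python
# Pull links out of a (fabricated) email and flag hosts that do not belong to
# the domain the message claims to represent. All sample data is invented.

import email
import re
from urllib.parse import urlparse

raw = """From: Go Daddy <support@secure-account-verify.example>
Subject: Confirm this account
Content-Type: text/html

<html><body>Our records shows your account details is incomplete.
<a href="http://godaddy.account-verify.example/login">Verify Now</a></body></html>
"""

msg = email.message_from_string(raw)
claimed_brand = "godaddy.com"                       # who the mail pretends to be

sender_domain = re.search(r"@([\w.-]+)", msg["From"]).group(1)
links = re.findall(r'href="([^"]+)"', msg.get_payload())

print(f"sender domain: {sender_domain}  (claims to be {claimed_brand})")
for url in links:
    host = urlparse(url).hostname or ""
    suspicious = not (host == claimed_brand or host.endswith("." + claimed_brand))
    print(f"link host: {host}  suspicious={suspicious}")
```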



Making Sense of the Domain Name Market - and Its Future

2017-09-05T14:13:00-08:00

With ever more TLDs, where does it make sense to focus resources? After four years and a quadrupling of internet extensions, what metrics continue to make sense in the domain name industry? Which should we discard? And how do you gain an understanding of this expanded market? For registries, future success depends on grasping the changes that have already come. For registrars, it is increasingly important to identify winners and allocate resources accordingly. The question is: how?

The biggest barrier to both these goals, ironically, may be the industry's favorite measure: the number of registrations. Since the earliest days, registrations have been the main marker of success: Who's up? Who's down? Who's in the top 10? Top five? But even when this approach made sense, it relied on ignoring the elephant in the room: dot-com.

The Verisign dot-com beast remains six times larger than the next largest TLD. But, for a long time, the fact that most of the other gTLDs and ccTLDs (and even sTLDs) were clumped closely together made registration figures the go-to metric. Now, in 2017, the same extreme of scale that separates dot-com from everything else also exists at the other end of the market: there are more than a thousand new gTLDs in the root, but even the largest of them barely touch the legacy gTLDs or ccTLDs in terms of numbers of registrations. It may be time to rethink how we look at the market.

Another traditional measure has been the number, or percentage, of parked domains. It used to be that if a domain owner wasn't actually using their domain to host a website, it was a sign the registration was more likely to be dropped or was purely speculative. But do parked domains still tell that story? Often, a parked domain is intellectual property protection. It may be part of a planned online expansion. And although parked domains are assumed to be speculative, they are often renewed again and again. This is especially true with older registries. You could argue that, in terms of a registry's inherent value, a parked domain that is held by a single owner for many years is more valuable than one with a website that changes hands every year. Maybe we need to consider more than just whether a domain has a website attached and start digging into the history of its registration.

Intertwined

The truth is that the domain name market has been around for a relatively long time now and has become more complex and intertwined with the larger economy than we give it credit for. The market is also unusual in that it has not grown according to demand but in fits and starts, defined by and dependent on the arcane processes and approvals of its overseeing body, ICANN.

Dot-com is the giant of the internet because it was the only openly commercial online space available at a time when the internet's potential was first realized by businesses and entrepreneurs. Even now, the ending ".com" in many ways defines the global address system. While its growth has slowed, it still towers over every other TLD.

Then came small bursts of new gTLDs, joined by more commercialized ccTLDs, which all benefitted from the globalization of the internet. Most of them are roughly the same size: between two and five million registrations. And now comes the new wave of TLDs that has produced a third block of registries, with registrations largely ranging from one thousand to one million.

These three time periods tell a story about the domain name market: that for all its fluidity and its speed, the market is not only stable but also segmented.
There is no point in Germany's dot-de dreaming of becoming the s[...]



Security is a System Property

2017-09-05T13:09:00-08:00

There's lots of security advice in the press: keep your systems patched, use a password manager, don't click on links in email, etc. But there's one thing these adages omit: an attacker who is targeting you, rather than whoever falls for the phishing email, won't be stopped by one defensive measure. Rather, they'll go after the weakest part of your defenses. You have to protect everything — including things you hadn't realized were relevant. Security is a systems problem: everything matters, including the links between the components and even the people who use the system.

Passwords are a good illustration of this point. We all know the adage: "pick strong passwords". There are lots of things wrong with this and other simplistic password advice, but we'll ignore most of them to focus on the systems problem. So: what attacks do strong passwords protect against?

The original impetus for this advice came from a 1979 paper by Bob Morris and Ken Thompson. (Morris later became Chief Scientist of the NSA's National Computer Security Center; Thompson is one of the creators of Unix.) When you read it carefully, you realize that strong passwords guard against exactly two threats: someone who tries to log in as you, and someone who has hacked the remote site and is trying to guess your password. But strong passwords do nothing if your computer (in those days, computer terminal...) is hacked, or if the line is tapped, or if you're lured to a phishing site and send your password, in the clear, to an enemy site. To really protect your password, then, you need to worry about all of those factors and more.

It's worth noting that Morris and Thompson understood this thoroughly. Everyone focuses on the strong password part, and — if they're at least marginally competent — on password salting and hashing (a short sketch appears at the end of this post), but few people remember this quote, from the first page of the paper:

Remote-access systems are peculiarly vulnerable to penetration by outsiders as there are threats at the remote terminal, along the communications link, as well as at the computer itself. Although the security of a password encryption algorithm is an interesting intellectual and mathematical problem, it is only one tiny facet of a very large problem. In practice, physical security of the computer, communications security of the communications link, and physical control of the computer itself loom as far more important issues. Perhaps most important of all is control over the actions of ex-employees, since they are not under any direct control and they may have intimate knowledge about the system, its resources, and methods of access. Good system security involves realistic evaluation of the risks not only of deliberate attacks but also of casual authorized access and accidental disclosure.

(True confession: I'd forgotten that they noted the scope of the problem, perhaps because I first read that paper when it originally appeared.)

I bring this up now because of some excellent reporting about hacking and the 2016 election. Voting, too, is a system — it's not just voting machines that are targets, but rather, the entire system. This encompasses registration, handling of the "poll books" — which may themselves be computerized — the way that poll workers sign in voters, and more. I'll give an example, from the very first time I could vote in a presidential election: the poll workers couldn't find my registration card. I was sent off to a bank of phones to try to call the county election board.
The board had far too few phone lines, so I kept[...]
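On the salting-and-hashing aside above: here is a minimal sketch, in Python, of what salted, iterated password hashing looks like on the server side, using only the standard library's hashlib, hmac, and secrets modules. It is an illustration of the technique, not a drop-in authentication system; the iteration count and demo passwords are arbitrary choices made for the example.

# Minimal sketch of salted, iterated password hashing (illustration only).
import hashlib
import hmac
import secrets

ITERATIONS = 600_000  # slows down offline guessing after a database breach

def hash_password(password):
    """Return (salt, hash); store both, never the plaintext password."""
    salt = secrets.token_bytes(16)  # unique per user, defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, stored):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

if __name__ == "__main__":
    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("some wrong guess", salt, stored))              # False

Note that this addresses only the "attacker guessing against stolen hashes" threat; as the post argues, it does nothing if the password is typed into a phishing page or captured on a compromised terminal or a tapped line.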



Global Content Removals Based on Local Legal Violations - Where are we Headed?

2017-09-05T12:42:00-08:00

Excerpt from my Internet Law casebook discussing transborder content removal orders, including the Equustek case.

From the Internet's earliest days, the tension between a global communication network and local geography-based laws has been obvious. One scenario is that every jurisdiction's local laws apply to the Internet globally, meaning that the country (or sub-national regulator) with the most restrictive law for any content category sets the global standard for that content. If this scenario comes to pass, the Internet will only contain content that is legal in every jurisdiction in the world — a small fraction of the content we as Americans might enjoy, because many countries restrict content that is clearly legal in the U.S.

Perhaps surprisingly, we've generally avoided this dystopian scenario — so far. In part, this is because many major Internet services create localized versions of their offerings that conform to local laws, which allows the services to make country-by-country removals of locally impermissible content. Thus, the content on google.de might vary substantially from the content on google.com. This localization undermines the 1990s utopian vision that the Internet would enable a single global content database that everyone in the world could uniformly enjoy. However, service localization has also forestalled more dire regulatory crises. So long as google.de complies with local German laws and google.com complies with local U.S. laws, regulators in the U.S. and Germany should be OK...right?

Increasingly, the answer appears to be "no." Google's response to the European RTBF rule has highlighted the impending crisis. In response to the RTBF requirement that search engines remove certain search results associated with individuals' names, Google initially de-indexed results only from its European indexes, i.e., Google would scrub the results from Google.de but not Google.com. However, European users of Google can easily seek out international versions of Google's search index. An enterprising European user could go to Google.com and obtain unscrubbed search results — and compare them with the localized edition of Google to see which results had been scrubbed.

The French Commission Nationale de l'Informatique et des Libertés (CNIL) has deemed this outcome unacceptable. As a result, it has demanded that Google honor an RTBF de-indexing request across all of its search indexes globally. In other words, if a French resident successfully makes a de-indexing request under European data privacy laws, Google should not display the removed result to anyone in the world, even searchers outside of Europe who are not subject to European law.

The CNIL's position is not unprecedented; other governmental agencies have made similar demands for the worldwide suppression of content they object to. However, the demand on Google threatens to break the Internet. Either Google must cease all of its French operations to avoid being subject to the CNIL's interpretation of the law, or it must give a single country the power to decide what content is appropriate for the entire world — which, of course, could produce conflicts with the laws of other countries.

Google proposed a compromise: it removes RTBF results from its European indexes, and if a European searcher accesses a non-European version of Google's search index, Google dynamically scrubs the results it delivers to that searcher. As a result, if the European searcher tries to get around the Eur[...]
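To make the mechanics of that proposed compromise concrete, here is a minimal, hypothetical sketch of jurisdiction-based scrubbing: a result that has been ordered de-indexed in a given jurisdiction is filtered out whenever the requester appears to be in that jurisdiction, regardless of which country-level index was queried. All names and region labels are invented for illustration; this is not Google's implementation.

# Hypothetical sketch of jurisdiction-based result scrubbing (invented names).
from dataclasses import dataclass, field

@dataclass
class Result:
    url: str
    title: str
    # Jurisdictions in which this result has been ordered de-indexed, e.g. {"EU"}.
    removed_in: set = field(default_factory=set)

def scrub(results, requester_region):
    # Drop results de-indexed in the requester's jurisdiction, no matter which
    # country-level index (google.de, google.com, ...) the request went to.
    return [r for r in results if requester_region not in r.removed_in]

if __name__ == "__main__":
    index = [
        Result("https://example.com/old-story", "An old story", removed_in={"EU"}),
        Result("https://example.com/other", "Unrelated page"),
    ]
    print([r.url for r in scrub(index, "EU")])  # de-indexed result is hidden
    print([r.url for r in scrub(index, "US")])  # both results shown

The legal question raised in the excerpt is precisely whether filtering keyed to the searcher's location, rather than removal from every index worldwide, satisfies the CNIL.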