
CircleID



Latest posts on CircleID



Updated: 2017-09-20T07:29:00-08:00

 



Spanish Police Raid the Offices of .cat gTLD Registry

2017-09-20T07:29:00-08:00

(Photo posted by Fundació puntCAT during the raid.) The offices of the .cat gTLD registry Fundació puntCAT were raided by the Spanish police this morning. The company reported the incident via a series of tweets as the raid was being carried out. "Right now spanish police @guardiacivil is doing an intervention in our office @ICANN," was tweeted about four hours ago, followed by another tweet reporting that the police were headed to the CTO's home: "We're wating for him to arrive to our office to start the intervention."

Michele Neylon writes: "The move comes a couple of days after a Spanish court ordered the domain registry to take down all .cat domain names being used by the upcoming Catalan referendum. The .cat domain registry currently has over 100 thousand active domain names, and in light of the actions taken by the Spanish government, it's unclear how the registry will continue to operate if their offices are effectively shut down by the Spanish authorities. The seizure won't impact live domain names or general day-to-day operations by registrars, as the registry backend is run by CORE and leverages global DNS infrastructure. However, it is deeply worrying that the Spanish government's actions would spill over onto an entire namespace."

Follow CircleID on Twitter

More under: Registry Services, Top-Level Domains




The Madness of Broadband Speed Tests

2017-09-19T10:55:00-08:00

The broadband industry has falsely sold its customers on "speed", so unsurprisingly "speed tests" have become an insane and destructive benchmark.

As a child, I would go to bed, and sometimes the garage door would swing open before I went to sleep. My father had come home early from the late shift, where he was a Licensed Aircraft Maintenance Engineer for British Airways. I would wait for him eagerly, and he would come upstairs, still smelling of kerosene and Swarfega. With me lying in bed, he would tell me tales of his work, and stories about the world. Just don't break the wings off as you board!

Funnily enough, he never told me about British Airways breaking the wings off its aircraft. You see, he was involved in major maintenance checks on Boeing 747s. He joined BOAC in 1970 and stayed with the company for 34 years until retirement. Not once did he even hint at any desire for destructive testing for aircraft. Now, when a manufacturer makes a brand new airplane type, it does test them to destruction. Here's a picture I shamelessly nicked showing the Airbus A350 wing flex test. I can assure you, they don't do this in the British Airways hangars TBJ and TBK at Hatton Cross maintenance base at Heathrow. Instead, they have non-destructive testing using ultrasound and X-rays to look for cracks and defects.

So what's this all got to do with broadband? Well, we're doing the equivalent of asking the customers to break the wings off every time they board. And even worse, our own engineers have adopted destructive testing over non-destructive testing! Because marketing departments at ISPs refuse to define what experience they actually intend to deliver (and what is unreasonable to expect), the network engineers are left with a single and simple marketing requirement: "make it better than it was". When you probe them on what this means, they shrug and tell you "well, we're selling all our products on peak speed, so we try to make the speed tests better". This, my friends, is bonkers.

The first problem is that the end users are conducting a denial-of-service attack on themselves and their neighbours. A speed test deliberately saturates the network, placing it under maximum possible stress. The second problem is that ISPs themselves have adopted speed tests internally, so they are driving mad levels of cost carrying useless traffic designed to over-stress their network elements. Then to top it all, regulators are encouraging speed tests as a key metric, deploying huge numbers of boxes hammering the broadband infrastructure even in its most fragile peak hour. The proportion of traffic coming from speed tests is non-trivial.

So what's the alternative? Easy! Instead of destructive testing, do non-destructive testing. We know how to X-ray a network, and the results are rather revealing. If you use the right metrics, you can also model the performance limits of any application from the measurements you take. Even a speed test! So you don't need to snap the wings off your broadband service every time you use it after all.

I think I'll tell my daughters at their next bedtime. It's good life guidance. Although I can imagine my 14-year-old dismissing it as another embarrassing fatherly gesture and uninteresting piece of parental advice. Sometimes it takes a while to appreciate our inherited wisdom.

Written by Martin Geddes, Founder, Martin Geddes Consulting Ltd

Follow CircleID on Twitter

More under: Access Providers, Broadband, Telecom [...]
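The contrast between destructive and non-destructive measurement can be made concrete. Below is a minimal sketch, not from the article, of a latency probe that times a few paced TCP handshakes instead of saturating the link; the target host, port, probe count, and pacing interval are illustrative assumptions.

```python
# Minimal sketch of a non-saturating probe: instead of flooding the link the way a
# speed test does, it times a handful of tiny TCP handshakes at a low rate and
# reports latency and spread. The target host/port are illustrative placeholders.
import socket
import statistics
import time

def probe_rtt(host: str, port: int = 443, samples: int = 20, gap_s: float = 0.5):
    """Time TCP handshakes to estimate round-trip latency without loading the link."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=2):
                pass  # handshake completed; no payload is sent
        except OSError:
            continue  # drop failed samples rather than abort the run
        rtts.append((time.perf_counter() - start) * 1000.0)  # milliseconds
        time.sleep(gap_s)  # pace the probes so the link is never saturated

    return rtts

if __name__ == "__main__":
    rtts = probe_rtt("example.net")  # placeholder target, not a real vantage point
    if rtts:
        print(f"median RTT: {statistics.median(rtts):.1f} ms")
        print(f"spread (max-min): {max(rtts) - min(rtts):.1f} ms over {len(rtts)} probes")
```

A probe like this runs continuously at negligible cost, which is the point of the article's "X-ray" analogy: the interesting quantities are latency and its variation under normal load, not how hard the line can be driven.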



EFF Resigns from World Wide Web Consortium (W3C) over EME Decision

2017-09-19T07:36:00-08:00

In an open letter to the World Wide Web Consortium (W3C), the Electronic Frontier Foundation (EFF) announced on Tuesday that it is resigning from the organization in response to the W3C publishing Encrypted Media Extensions (EME) as a standard. From the letter: "In 2013, EFF was disappointed to learn that the W3C had taken on the project of standardizing "Encrypted Media Extensions," an API whose sole function was to provide a first-class role for DRM within the Web browser ecosystem. By doing so, the organization offered the use of its patent pool, its staff support, and its moral authority to the idea that browsers can and should be designed to cede control over key aspects from users to remote parties. ... We believe they will regret that choice. Today, the W3C bequeaths a legally unauditable attack-surface to browsers used by billions of people. They give media companies the power to sue or intimidate away those who might re-purpose video for people with disabilities. They side against the archivists who are scrambling to preserve the public record of our era. The W3C process has been abused by companies that made their fortunes by upsetting the established order, and now, thanks to EME, they'll be able to ensure no one ever subjects them to the same innovative pressures."

Follow CircleID on Twitter

More under: Cybersecurity, Policy & Regulation, Privacy, Web




Net Neutrality Advocates Planning Two Days of Protest in Washington DC

2017-09-18T09:53:00-08:00

A coalition of activists and consumer groups is planning to gather in Washington, DC to meet directly with members of Congress as they protest plans to defang regulations meant to protect an open internet.

The event organizer, Fight for the Future, is running a dedicated website 'battleforthenet.com/dc' in which it states in part: "On September 26-27 Internet users from across the country will converge on Washington, DC to meet directly with their members of Congress, which is by far the most effective way to influence their positions and counter the power of telecom lobbyists and campaign contributions. ... The only thing that can stop them is a coordinated grassroots effort of constituents directly pressuring our members of Congress, who have the power to stop the FCC and vote down bad legislation."

Participating organizations in the protest include Fight for the Future, Public Knowledge, EFF, Center for Media Justice, Common Cause, Consumers Union, Free Press and the Writers Guild of America West. See additional report by Dominic Rushe in The Guardian.

Follow CircleID on Twitter

More under: Net Neutrality, Policy & Regulation




Forty Percent of New Generic TLDs Shrinking, According to Domain Incite Analysis

2017-09-18T08:39:00-08:00

Forty percent of non-brand new gTLDs are shrinking, reports Kevin Murphy in Domain Incite: "According to numbers culled from registry reports, 172 of the 436 commercial gTLDs we looked at had fewer domains under management at the start of June than they did a year earlier. ... As you might expect, registries with the greatest exposure to the budget and/or Chinese markets were hardest hit over the period. .wang, .red, .ren, .science and .party all saw DUM decline by six figures. Another 27 gTLDs saw declines of over 10,000 names."

Follow CircleID on Twitter

More under: Domain Names, Registry Services, Top-Level Domains




Preliminary Thoughts on the Equifax Hack

2017-09-17T10:08:00-08:00

As you've undoubtedly heard, the Equifax credit reporting agency was hit by a major attack, exposing the personal data of 143 million Americans and many more people in other countries. There's been a lot of discussion of liability; as of a few days ago, at least 25 lawsuits had been filed, with the state of Massachusetts preparing its own suit. It's certainly too soon to draw any firm conclusions about who, if anyone, is at fault — we need more information, which may not be available until discovery during a lawsuit — but there are a number of interesting things we can glean from Equifax's latest statement.

First and foremost, the attackers exploited a known bug in the open source Apache Struts package. A patch was available on March 6. Equifax says that their "Security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company's IT infrastructure." The obvious question is why this particular system was not patched.

One possible answer is, of course, that patching is hard. Were they trying? What does "took efforts to identify and to patch" mean? Were the assorted development groups actively installing the patch and testing the resulting system? It turns out that this fix is difficult to install:

You then have to hope that nothing is broken. If you're using Struts 2.3.5 then in theory Struts 2.3.32 won't break anything. In theory it's just bug fixes and security updates, because the major.minor version is unchanged. In theory. In practice, I think any developer going from 2.3.5 to 2.3.32 without a QA cycle is very brave, or very foolhardy, or some combination of the two. Sure, you'll have your unit tests (maybe), but you'll probably need to deploy into your QA environment and do some kind of integration testing too. That's assuming, of course, that you have a compatible QA environment within which you can deploy your old, possibly abandoned application.

Were they trying hard enough, i.e., devoting enough resources to the problem? Ascertaining liability here — moral and/or legal — can't be done without seeing the email traffic between the security organization and the relevant development groups; you'd also have to see the activity logs (code changes, test runs, etc.) of these groups. Furthermore, if problems were found during testing, it might take quite a while to correct the code, especially if there were many Struts apps that needed to be fixed.

As hard as patching and testing are, though, when there are active exploitations going on you have to take the risk and patch immediately. That was the case with this vulnerability. Did the Security group know about the active attacks or not? If they didn't, they probably aren't paying enough attention to important information sources. Again, this is information we're only likely to learn through discovery. If they did know, why didn't they order a flash-patch?

Did they even know which systems were vulnerable? Put another way, did they have access to a comprehensive database of hardware and software systems in the company? They need one — there are all sorts of other things you can't do easily without such a database. Companies that don't invest up front in their IT infrastructure will hurt in many other ways, too. Equifax has a market capitalization of more than $17 billion; they don't really have an excuse for not running a good IT shop.

It may be, of course, that Equifax knew all of that and still chose to leave the vulnerable servers up. Why?
Apparently, the vulnerable machine was their "U.S. online dispute portal". I'm pretty certain that they're required by law to have a dispute mechanism, and while it probably doesn't have to be a website (and some people suggest that complainants shouldn't use it anyway), it's almost certainly a much cheaper way to receive disputes than is paper mail. That opens the possibility t[...]
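The point about a comprehensive hardware/software inventory is easy to illustrate. Below is a hedged sketch, not anything from Equifax or from the post above, that scans a hypothetical asset inventory for applications still on a Struts 2.3.x release older than 2.3.32 (the release carrying the March 2017 fix for CVE-2017-5638); the inventory format and records are assumptions made for illustration.

```python
# Hedged sketch: flag entries in a hypothetical software inventory that still run a
# Struts 2.3.x release older than 2.3.32 (the 2.3 line's fix for CVE-2017-5638).
# The inventory structure is an assumption; real asset databases differ.
from typing import Dict, List, Tuple

FIXED = (2, 3, 32)  # first 2.3.x release with the fix

def parse_version(v: str) -> Tuple[int, ...]:
    return tuple(int(part) for part in v.split("."))

def vulnerable_hosts(inventory: List[Dict]) -> List[Dict]:
    flagged = []
    for item in inventory:
        ver = parse_version(item["struts_version"])
        # Only the 2.3 line is compared here; other lines would need their own threshold.
        if ver[:2] == (2, 3) and ver < FIXED:
            flagged.append(item)
    return flagged

if __name__ == "__main__":
    inventory = [  # illustrative records, not real Equifax systems
        {"host": "dispute-portal-01", "app": "online dispute portal", "struts_version": "2.3.5"},
        {"host": "reporting-api-02", "app": "reporting API", "struts_version": "2.3.32"},
    ]
    for item in vulnerable_hosts(inventory):
        print(f"PATCH NEEDED: {item['host']} ({item['app']}) on Struts {item['struts_version']}")
```

The query itself is trivial; the hard organizational problem the article points to is having an inventory that is complete and current enough for such a query to mean anything.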



China to Create National Cyberattack Database

2017-09-15T13:43:00-08:00

China has revealed plans to create a national data repository for information on cyberattacks and will require telecom firms, internet companies and domain name service providers to report threats to it. Reuters reports: "The Ministry of Industry and Information Technology (MIIT) said companies and telcos as well as government bodies must share information on incidents including Trojan malware, hardware vulnerabilities, and content linked to "malicious" IP addresses to the new platform. An MIIT policy note also said that the ministry, which is creating the platform, will be liable for disposing of threats under the new rules, which will take effect on Jan. 1."

Follow CircleID on Twitter

More under: Cybercrime, Cybersecurity, Policy & Regulation, Registry Services, Telecom




Bluetooth-Based Attack Vector Dubbed "BlueBorne" Exposes Almost Every Connected Device

2017-09-15T13:30:00-08:00

A newly discovered set of zero-day Bluetooth-related vulnerabilities could affect billions of devices in use today. Security firm Armis Labs has revealed a new attack vector that can target major mobile, desktop, and IoT operating systems, including Android, iOS, Windows, and Linux, and the devices using them. The new vector is named "BlueBorne" because it spreads through the air (airborne) and attacks devices via Bluetooth.

No pairing required: "BlueBorne is an attack vector by which hackers can leverage Bluetooth connections to penetrate and take complete control over targeted devices. BlueBorne affects ordinary computers, mobile phones, and the expanding realm of IoT devices. The attack does not require the targeted device to be paired to the attacker's device, or even to be set on discoverable mode."

— "The BlueBorne attack vector has several qualities which can have a devastating effect when combined. By spreading through the air, BlueBorne targets the weakest spot in the networks' defense — and the only one that no security measure protects. Spreading from device to device through the air also makes BlueBorne highly infectious. Moreover, since the Bluetooth process has high privileges on all operating systems, exploiting it provides virtually full control over the device."

Vulnerabilities were found in Android, Microsoft, Linux, and iOS versions before iOS 10. "Armis reported the vulnerabilities to Google, Microsoft, and the Linux community. Google and Microsoft are releasing updates and patches on Tuesday, September 12. Others are preparing patches that are in various stages of being released."

Follow CircleID on Twitter

More under: Cyberattack, Cybersecurity, Malware, Mobile Internet, Wireless




U.S. Navy Investigating Possibility of Cyberattack Behind Two Navy Destroyer Collisions

2017-09-15T12:53:00-08:00


Deputy chief of naval operations for information warfare, Vice Adm. Jan Tighe, says the military is investigating the possibility of compromised computer systems behind two U.S. Navy destroyer collisions with merchant vessels that occurred in recent months. Elias Groll reporting in Foreign Policy: "Naval investigators are scrambling to determine the causes of the mishaps, including whether hackers infiltrated the computer systems of the USS John S. McCain ahead of the collision on Aug. 21, Tighe said during an appearance at the Center for Strategic and International Studies in Washington… The Navy has no indication that a cyberattack was behind either of the incidents, but it is dispatching investigators to the McCain to put those questions to rest, she said."

Follow CircleID on Twitter

More under: Cyberattack, Cybersecurity




In Response to 'Networking Vendors Are Only Good for the Free Lunch'

2017-09-14T15:39:00-08:00

I ran into an article over at the Register this week which painted the entire networking industry, from vendors to standards bodies, with a rather broad brush. While there are true bits and pieces in the piece, some balance seems to be in order. The article recaps a presentation by Peyton Koran at Electronic Arts (I suspect the Register spiced things up a little for effect); the line of argument seems to run something like this:

- Vendors are only paying attention to larger customers, and/or a large group of customers asking for the same thing; if you are not in either group, then you get no service from any vendor.
- Vendors further bake secret sauce into their hardware, making it impossible to get what you want from your network without buying from them.
- Standards bodies are too slow, and hence useless.
- People are working around this, and getting to the inter-operable networks they really want, by moving to the cloud.
- There is another way: just treat your networking gear like servers, and write your own protocols--after all you probably already have programmers on staff who know how to do this.

Let's think about these a little more deeply.

Vendors only pay attention to big customers and/or big markets. – Ummm… Yes. I do not know of any company that does anything different here, including the Register itself. If you can find a company that actually seeks the smallest market, please tell me about them, so I can avoid their products, as they are very likely to go out of business in the near future. So this is true, but it is just a part of the real world.

Vendors bake secret sauce into their hardware to increase their profits. – Well, again… Yes. And how is any game vendor any different, for instance? Or what about an online shop that sells content? Okay, next.

Standards bodies are too slow, and hence useless. – Whenever I hear this complaint, I wonder if the person making the complaint has actually ever built a real live running system, or a real live deployed standard that provides interoperability across a lot of different vendors, open source projects, etc. Yes, it often seems silly how long it takes for the IETF to ratify something as a standard. But have you ever considered how many times things are widely implemented and deployed before there is a standard? Have you ever really looked at the way standards bodies work to understand that there are many different kinds of standards, each of which with a different meaning, and that not everything needs to be the absolute tip top rung on the standards ladder to be useful? Have you ever asked how long it takes to build anything large and complicated? I guess we could say the entire open source community is slow and useless because it took many years for even the Linux operating system to be widely deployed, and to solve a lot of problems.

Look, I know the IETF is slow. And I know the IETF has a lot more politics than it should. I live both of those things. But I also know the fastest answer is not always the right answer, and throwing away decades of experience in designing protocols that actually work is a pretty dumb idea — unless you really just want to reinvent the wheel every time you need to build a car.

In the next couple of sentences, we suddenly find that someone needs to call out the contradiction police, replete in their bright yellow suits and funny hats. Because now it seems people want inter-operable networks without standards bodies!
Let me make a simple point here that many people just do not seem to realize: You cannot have interoperability across multiple vendors and multiple open source projects without some forum where they can all discuss the best way to do something, and find enough common ground to make their various products inter-operate. I hate to break the news[...]



Abusive and Malicious Registrations of Domain Names

2017-09-14T07:43:00-08:00

When ICANN implemented the Uniform Domain Name Dispute Resolution Policy (UDRP) in 1999, it explained its purpose as combating "abusive registrations" of domain names, which it defined as registrations "made with bad-faith intent to profit commercially from others' trademarks (e.g., cybersquatting and cyberpiracy)." (The full statement can be found in the Second Staff Report on Implementation Documents for the Uniform Dispute Resolution Policy, Paragraph 4.1(c)). Bad actors employ a palette of stratagems, such as combining marks with generic qualifiers, truncating or varying marks, or removing, reversing, and rearranging letters within the second level domain (typosquatting). They are costly to police, and likely even more costly to maintain once the domain names are forfeited, but for all the pain they inflict they are essentially plain vanilla irritants.

While these kinds of disputes essentially dominate the UDRP docket, there has been an increase in the number of disputes involving malicious registrations. The first instances of "phishing" and "spoofing" appear in a 2005 case, CareerBuilder, LLC v. Stephen Baker, D2005-0251 (WIPO May 6, 2005), in which the Panel found that the "disputed domain name is being used as part of a phishing attack (i.e., using 'spoofed' e-mails and a fraudulent website designed to fool recipients into divulging personal financial data such as credit card numbers, account usernames and passwords, social security numbers, etc.)"

The quainter forms of abuse come from registrants looking to pluck lower-hanging fruit. They are so obviously opportunistic that respondents don't even bother to appear (they also fail to appear in the malicious cases, but for another reason: to avoid identification). The plain vanilla type is represented by such cases as Guess? IP Holder L.P. and Guess? Inc. v. Domain Admin: Damon Nelson — Manager, Quantec LLC, Novo Point LLC, D2017-1350 (WIPO August 24, 2017), in which Complainant's product line includes "accessories." In these types of cases, respondents are essentially looking for visitors.

In contrast, malicious registrations are of the kind described, for example, in Google Inc. v. 1&1 Internet Limited, FA1708001742725 (Forum August 31, 2017), in which respondent used the complainant's mark and logo on a resolving website containing offers for technical support and password recovery services, and soliciting Internet users' personal information; Complainant's exhibit 11 displays a malware message on the webpage, which Complainant claims indicates fraudulent conduct.

Malicious registrations are a step up in that they introduce a new, more disturbing, and even criminal element into the cyber marketplace. Respondents are not just looking for visitors; they are targeting brands for victims. Their bad faith goes beyond "profit[ing] commercially from others' trademarks" to operating websites (or using e-mails) as Trojan horses. It aligns registrations actionable under the UDRP with conduct policed and prosecuted by governments. The UDRP, then, is not just a "rights protection mechanism." The term "abusive registration" has enlarged in meaning (and, thus, in jurisdiction) to include malicious conduct generally.

Total security is a pipe dream. ICANN has working groups devoted to mapping the problem, and there are analytical studies assessing its extent in legacy and new TLDs.
Some idea of the magnitude is seen in the "Statistical Analysis of DNS Abuse in gTLDs Final Report" commissioned by an ICANN-mandated review team, the Competition, Consumer Trust and Consumer Choice Review Team (CCTRT). Incidents of abusive and malicious activity online and radiating out to affect the public offline represent the universe of cyber crime and u[...]
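To make the typosquatting stratagems mentioned above concrete, here is a small illustrative sketch (not part of the article or of any UDRP filing) that generates a few of the variant forms described: marks combined with generic qualifiers, truncated marks, and rearranged letters. The mark and the qualifier list are hypothetical examples of the kind a brand owner might screen new registrations against.

```python
# Illustrative sketch: generate simple typosquat-style variants of a mark, mirroring
# the stratagems described above (generic qualifiers, truncation, letter swaps).
# The mark and qualifiers are hypothetical, not taken from any real dispute.

def variants(mark: str, qualifiers=("shop", "online", "support")) -> set:
    out = set()
    # Mark plus generic qualifier, e.g. "exampleshop"
    out.update(mark + q for q in qualifiers)
    # Truncation: drop one character at a time, e.g. "exmple"
    out.update(mark[:i] + mark[i + 1:] for i in range(len(mark)))
    # Rearranged letters: swap adjacent characters, e.g. "examlpe"
    out.update(
        mark[:i] + mark[i + 1] + mark[i] + mark[i + 2:] for i in range(len(mark) - 1)
    )
    out.discard(mark)  # the mark itself is not a variant
    return out

if __name__ == "__main__":
    for name in sorted(variants("example")):
        print(name + ".com")  # candidate strings to screen against new registrations
```

Real monitoring services add homoglyph substitution, keyboard-adjacency errors, and TLD permutations on top of these basics, but the principle is the same: enumerate cheaply, then check which candidates are actually registered.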



Can Constellations of Internet Routing Satellites Compete With Long-Distance Terrestrial Cables?

2017-09-13T14:16:00-08:00

The goal will be to have the majority of long distance traffic go over this network. —Elon Musk

Three companies, SpaceX, OneWeb, and Boeing, are working on constellations of low-Earth orbiting satellites to provide Internet connectivity. While all three may be thinking of competing with long, terrestrial cables, SpaceX CEO Elon Musk said "the goal will be to have the majority of long-distance traffic go over this (satellite) network" at the opening of SpaceX's Seattle office in 2015 (video below). (Image: SpaceX orbital path schematic.) Can he pull that off?

Their first constellation will consist of 4,425 satellites operating in 83 orbital planes at altitudes ranging from 1,110 to 1,325 km. They plan to launch a prototype satellite before the end of this year and a second one during the early months of 2018. They will start launching operational satellites in 2019 and will complete the first constellation by 2024. The satellites will use radios to communicate with ground stations, but links between the satellites will be optical.

At an altitude of 1,110 kilometers, the distance to the horizon is 3,923 kilometers. That means each satellite will have a line-of-sight view of all other satellites that are within 7,846 kilometers, forming an immense mesh network. Terrestrial networks are not so richly interconnected, and cables must zig-zag around continents and islands if undersea, and around other obstructions if underground. Latency in a super-mesh of long, straight-line links should be much lower than with terrestrial cable. Additionally, Musk says the speed of light in a vacuum is 40-50 percent faster than in a cable, cutting latency further.

Let's look at an example. I traced the route from my home in Los Angeles to the University of Magallanes in Punta Arenas at the southern tip of Chile. As shown here, the terrestrial route was 14 hops and the theoretical satellite link only five hops. (The figure is drawn roughly to scale.) So, we have 5 low-latency links versus 14 higher-latency links. The gap may close somewhat as cable technology improves, but it seems that Musk may be onto something.

Check out the following video of the speech Musk gave at the opening of SpaceX's Seattle office. His comments about the long-distance connections discussed here come at the three-minute mark, but I'd advise you to watch the entire 26-minute speech: https://www.youtube.com/embed/AHeZHyOnsm4

Written by Larry Press, Professor of Information Systems at California State University

Follow CircleID on Twitter

More under: Access Providers, Broadband, Telecom, Wireless [...]
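The horizon figures quoted in the post above are straightforward to reproduce. The short sketch below is an illustration rather than anything from the article: it derives the distance to the horizon from orbital altitude and compares one-way propagation delay over a straight vacuum path with the same distance in fiber; the path length and the fiber refractive index of roughly 1.47 are assumed typical values.

```python
# Sketch reproducing the article's geometry: horizon distance for a satellite at
# 1,110 km altitude, and a rough latency comparison between vacuum links and fiber.
import math

EARTH_RADIUS_KM = 6378.0        # equatorial radius
C_VACUUM_KM_S = 299_792.458     # speed of light in vacuum
FIBER_INDEX = 1.47              # assumed typical refractive index of optical fiber

def horizon_distance_km(altitude_km: float) -> float:
    """Line-of-sight distance from a satellite to the Earth's horizon."""
    r, h = EARTH_RADIUS_KM, altitude_km
    return math.sqrt((r + h) ** 2 - r ** 2)

def one_way_delay_ms(path_km: float, in_fiber: bool = False) -> float:
    speed = C_VACUUM_KM_S / (FIBER_INDEX if in_fiber else 1.0)
    return path_km / speed * 1000.0

if __name__ == "__main__":
    d = horizon_distance_km(1110)
    print(f"horizon distance: {d:,.0f} km")                   # ~3,923 km, as in the article
    print(f"max sat-to-sat line of sight: {2 * d:,.0f} km")    # ~7,846 km
    km = 10_000  # illustrative long-haul path length
    print(f"{km:,} km in vacuum: {one_way_delay_ms(km):.1f} ms")
    print(f"{km:,} km in fiber:  {one_way_delay_ms(km, in_fiber=True):.1f} ms")
```

With these assumed numbers the fiber path is roughly 47 percent slower than the vacuum path of the same length, which is consistent with the 40-50 percent figure attributed to Musk, before even counting the extra distance that terrestrial routes add by zig-zagging.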



Innovative Solutions for Farming Emerge at the Apps for Ag Hackathon

2017-09-13T09:16:00-08:00

Too often, people consider themselves passive consumers of the Internet. The apps and websites we visit are made by people with technical expertise using languages we don't understand. It's hard to know how to plug in, even if you have a great idea to contribute. One solution for this problem is the hackathon.

Entering the Hackathon Arena

For the uninitiated, a hackathon is a place of hyper-productivity. A group of people converge for a set period of time, generally a weekend, to build solutions to specific problems. Often, the hackathon has an overall goal, like the Sacramento Apps for Ag hackathon. "The Apps for Ag Hackathon was created to bring farmers, technologists, students and others from the agriculture and technology industries together in a vibrant, focused environment to create the seeds of new solutions for farmers using technology," says Gabriel Youtsey, Chief Innovation Officer, Agriculture and Natural Resources.

Now in its fourth year, the hackathon was bigger than ever and was held at The Urban Hive in Sacramento, with the pitch presentations taking place during the California State Fair. The event kicked off on Friday evening, with perspectives from a farmer on the challenges for agriculture in California, including labor, water supply, food safety, and pests, and how technology can help solve them. Hackathon participants also had opportunities to get up and talk about their own ideas for apps or other technology-related concepts to solve food and agriculture problems for farmers. From there, teams freely formed based on people's skills and inclinations. Although the hackathon is competitive, there is a great deal of collaboration happening, as people hash out ideas together. The hackathon itself provides tools and direction, and experts provide valuable advice and mentorship. At the end of the event, the teams presented working models of their apps and a slide deck to describe the business plan. Judges then decided who got to go home with the prizes, which often include support like office space, cash, and cloud dollars so that developers can keep building their software.

For Entrepreneurs, Newbies, and Techies Alike

In late July of this year, three people with very different career backgrounds entered the Apps for Ag Hackathon to dedicate their weekend to building a piece of software. They all walked away with a top prize and a renewed commitment to reimagining how technology can contribute to agriculture and food production. In the room was Sreejumon Kundilepurayil, a hackathon veteran who has worked for tech giants building mobile and software solutions; Scott Kirkland, a UC Davis software developer and gardener; and Heather Lee, a self-described generalist in business and agritourism enthusiast.

"I was terrified," Lee shared. "I'm tech capable — I've taken some coding classes — but I had no idea what my role would be. I decided to go and put myself in an uncomfortable position. When I got there, I realized that telling a story was my role." While her team members were mapping out the API and back-end development, Lee was working on the copy, graphics, video, and brand guide. Her idea for a mobile app that connects farmers and tourists for unique day-trips to farms ended up winning third place.

First place went to Kundilepurayil and Vidya Kannoly for an app called Dr Green, which will help gardeners and farmers diagnose plant diseases using artificial intelligence and machine learning.
Initially built for the Californian market, it will eventually be available globally as the machine gets more and more adept at identifying plants and problems. Through their phone, growers will also have access to a messaging feature to ask questions and get advice[...]



Amazon's Letter to ICANN Board: It's Time to Approve Our Applications for .AMAZON TLDs

2017-09-12T14:54:00-08:00

When ICANN launched the new gTLD program five years ago, Amazon eagerly joined the process, applying for .AMAZON and its Chinese and Japanese translations, among many others. Our mission was — and is — simple and singular: We want to innovate on behalf of our customers through the DNS. ICANN evaluated our applications according to the community-developed Applicant Guidebook in 2012; they achieved perfect scores. Importantly, ICANN's Geographic Names Panel determined that "AMAZON" is not a geographic name that is prohibited or one that requires governmental approval. We sincerely appreciate the care with which ICANN itself made these determinations, and are hopeful that a full approval of our applications is forthcoming. In a letter we sent to the ICANN Board on September 7, 2017 (the full text of which may be found below), we laid out the reasons for why our applications should be swiftly approved now that an Independent Review Process (IRP) panel found in our favor. Our letter highlights the proactive engagement we attempted with the governments of the Amazonia region over a five year period to alleviate any concerns about using .AMAZON for our business purposes. First, we have worked to ensure that the governments of Brazil and Peru understand we will not use the TLDs in a confusing manner. We proposed to support a future gTLD to represent the region using the geographic terms of the regions, including .AMAZONIA, .AMAZONICA or .AMAZONAS. We also offered to reserve for the relevant governments certain domain names that could cause confusion or touch on national sensitivities. During the course of numerous formal and informal engagements, we repeatedly expressed our interest in finding an agreed-upon outcome. And while the governments have declined these offers, we stand by our binding commitment from our July 4, 2013 Public Interest Commitment (PIC) to the .AMAZON applications, which stated that we will limit registration of culturally sensitive terms — engaging in regular conversations with the relevant governments to identify these terms — and formalizing the fact that we will not object to any future applications of .AMAZONAS, .AMAZONIA and .AMAZONICA. We continue to believe it is possible to use .AMAZON for our business purposes while respecting the people, culture, history, and ecology of the Amazonia region. We appreciate the ICANN Board's careful deliberation of our applications and the IRP decision. But as our letter states, approval of our .AMAZON applications by the ICANN Board is the only decision that is consistent with the bottom-up, multistakeholder rules that govern ICANN and the new gTLD program. We urge the ICANN Board to now approve our applications. An ICANN accountable to the global multistakeholder community must do no less. The full text of our letter is below. * * * Dear Chairman Crocker and Members of the ICANN Board of Directors: We write as the ICANN Board considers the July 10, 2017 Final Declaration of the Independent Review Process Panel (IRP) in Amazon EU S.à.r.l. v. ICANN regarding the .AMAZON Applications. Because the Panel concluded that the Board acted in a manner inconsistent with its Bylaws, we ask the Board to immediately approve our long-pending .AMAZON Applications. 
Such action is necessary because there is no sovereign right under international or national law to the name "Amazon," because there are no well-founded and substantiated public policy reasons to block our Applications, because we are committed to using the TLDs in a respectful manner, and because the Board should respect the IRP accountability mechanism. First, the Board should recognize that the IRP Panel carefully examined[...]



CE Router Certification Opens Up the Last Mile to IPv6 Fixed-Line

2017-09-12T08:08:00-08:00

With reference to IPv6, most end users probably have little sense of it. The prevailing view in the industry is that network carriers and content and service providers each stick to their own arguments. Carriers believe that, owing to the lack of IPv6 content and services, user demand for IPv6 is very small. Content and service providers hold that users cannot reach content and services over IPv6, and ask why they should offer IPv6 services under those conditions. Dr. Song Linjian of CFIEC stated in the article "China, towards fully-connected IPv6 networks" that the chicken-and-egg paradox between IPv6 networks and content is only temporary; it surely exists, but it is not the key obstacle. China has already prepared itself: once the last-mile problem is solved, user adoption will explode.

Every telecom carrier long ago began strictly enforcing procurement requirements that network devices must support IPv6, for example through IPv6 Ready Logo testing and certification, which satisfies this requirement. However, the CE devices (home gateways, wireless routers, etc.) purchased by users themselves mostly do not support IPv6, which causes the last-mile problem. "While IPv6 is still burgeoning, it is hard to require vendors and users to deploy IPv6-enabled and IPv6-certified devices. Enterprises that produce mature CE Routers (Customer Edge Routers, i.e., home gateway routers) supporting IPv6 do not launch those products in the Chinese market, because customers have no demand for IPv6. This has become the narrowest bottleneck hindering the development of IPv6 among fixed-line users," said Li Zhen, Director of BII-SDNCTC, with reference to fixed-line IPv6 development.

In the upcoming era of IoT, more and more devices need to be connected, and home gateway CE routers, as the switching center for home network information and data, need full support for IPv6. From another perspective, this shows that home gateways are now drawing serious attention in IPv6 work. On March 19, 2014, the IPv6 Forum and the IPv6 Ready Logo committee officially announced the IPv6 Ready CE Router Logo conformance and interoperability testing and certification program, which marks full support from a brand-new CE Router certification program for next-generation IPv6 deployment and commercialization.

According to statistics from the IPv6 Forum, about 3,000 network devices have passed IPv6 Ready certification at present, so the rate of IPv6 support is very high. But when it comes to home gateway CE devices, under the CE Router testing program within the IPv6 Ready Logo framework only 17 devices, from vendors such as Netgear (US), ZTE, and Broadcom, have passed IPv6 Ready Logo certification. As the key to the last mile of IPv6 access in households, the Chinese market for routing devices holds great potential, and CE Router-certified devices will have a stronger competitive edge in next-generation network deployment and commercialization.

According to the Global IPv6 Testing Center, the devices certified under the CE Router Logo are smart home gateways such as home routers, wireless routers, and GPON/EPON terminal devices. The testing covers the core protocols (Phase-2 enhanced certification), all DHCPv6 tests, and RFC 7084. Compared to the other certifications (Core, DHCPv6, IPsecv6, SNMPv6), this certification is highly targeted at such devices and much stricter.

In the future, more CE routers will be certified for IPv6, and seamless deployment of home IPv6 will gradually be realized, solving the last-mile problem[...]



Equifax Breach Blamed on Open-Source Software Flaw

2017-09-11T18:04:01-08:00

Equifax has blamed a flaw in the software running its online databases for the massive breach revealed last week that has allowed hackers to steal personal information of as many as 143 million customers. Kevin Dugan reporting in the New York Post: "Hackers were able to access the info — including Social Security numbers — because there was a flaw in the open-source software created by the Apache Foundation ... STRUTS is a widely available software system that's used by about 65 percent of Fortune 100 companies, including Lockheed Martin, Citigroup, Vodafone, Virgin Atlantic, Reader's Digest, Office Depot, and Showtime — plus the IRS, according to lgtm, a software development group."

Follow CircleID on Twitter

More under: Cybercrime, Cybersecurity




Lessons Learned from Harvey and Irma

2017-09-09T15:28:00-08:00

One of the most intense natural disasters in American history occurred last week. Hurricane Harvey challenged the state of Texas, while Florida braced for Irma. As with all natural disasters in this country, Americans are known to bond during times of crisis and help each other during times of need. Personally, I witnessed these behaviors during the 1989 quake in San Francisco. You may wish to donate or get involved with Hurricane Harvey relief to help the afflicted. That's great, but as we all know, we should be wary of who we connect with online. Scammers are using Hurricane Harvey and Irma relief efforts as con games and, even more despicably, as phishbait. The FTC warned last week that there are many active relief scams in progress and noted that there always seems to be a spike in registration of bogus domains. If you doubt a charity you are not familiar with, you are wise to think before you give. We recommend you do some common-sense vetting and donate through charities you can verify. Even better, check out the Wise Giving Alliance from the Better Business Bureau, a tool to verify legitimate charities.

In this article, we focus on a group of shameless miscreants that are profiting from the misfortune of others during times of crisis and natural disasters. We illuminate the intensity of malicious domain registration in the days before and after disasters like Hurricanes Harvey and Irma. Finally, we address what we can learn during these difficult times.

The intensity of malicious domain creation during and for several days after Hurricane Harvey is appalling. On August 30th alone, several hundred domains were created with the term "harvey" in them. While not all of the registrants had malicious intent, I'm betting at least a small percentage of them did. Their goal was to extort money, data, or both from innocent victims who happened to be in harm's way, as well as from good Samaritans whose compassion for the victims made them vulnerable. In searches of "Harvey" and "Irma" related domains registered between August 28th and September 8th, thousands of such domains were found. That does not even take into account homoglyphs, which will be further outlined in this article. The domain names fall into four broad categories:

- Legal / Insurance, using terms such as Attorney, Lawyer, Claims.
- Rebuilding, using terms such as Roofing, Construction.
- Storm tracking, such as WILLHURRICANEIRMAHIT.US.
- New or fraudulent charities, using terms such as Relief, Project, Victims, Help.

The legal / insurance terms are registered a year or more in advance for every hurricane name listed. You can see a full list of future hurricane names here, listed by the National Hurricane Center. By pivoting on the name servers or registrant data, we can see the same actors register all those domains far ahead of time. (An accompanying infographic shows words that appear in domains registered in August and September so far related to hurricane, Harvey, or Irma.)

When crises strike, one needs the best tools plus a well-trained team that knows how to maximize your use of this exceptional data. Utilizing DNS techniques that can help your company avoid onboarding fraudulent fundraisers and profiteering opportunists is vital to protecting your company reputation and the reputation of your outbound IP address ranges.
Here's a deep dive tip that few companies have discovered, but all can apply: As one part of the recursive "domain name resolution" process, the TLD registry zone file connects each domain name to authoritative name server hosts, and each authoritative name server host to an IP address. Starting with one known malicious domain name — o[...]
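The pivot the article begins to describe can be sketched in a few lines. The example below is illustrative only: the zone-file records and the "known bad" domain are fabricated, and real zone data would be parsed from a TLD registry's zone file. Given delegation data mapping domains to their authoritative name servers, the sketch starts from one suspicious name and lists every other domain delegated to the same name servers.

```python
# Illustrative sketch of the name-server pivot: from one known-bad domain, find all
# other domains in the zone data delegated to the same authoritative name servers.
# The records below are fabricated examples, not real registrations.
from collections import defaultdict

ZONE_RECORDS = [  # (domain, authoritative name server) pairs, as parsed from a TLD zone file
    ("harvey-relief-fund.example", "ns1.bulkhosting.example"),
    ("irma-victims-help.example", "ns1.bulkhosting.example"),
    ("hurricane-claims-now.example", "ns1.bulkhosting.example"),
    ("unrelated-bakery.example", "ns1.legit-dns.example"),
]

def pivot_on_nameservers(known_bad: str, records) -> set:
    by_ns = defaultdict(set)   # name server -> domains delegated to it
    ns_of = defaultdict(set)   # domain -> its name servers
    for domain, ns in records:
        by_ns[ns].add(domain)
        ns_of[domain].add(ns)
    related = set()
    for ns in ns_of[known_bad]:
        related |= by_ns[ns]
    related.discard(known_bad)
    return related

if __name__ == "__main__":
    for domain in sorted(pivot_on_nameservers("harvey-relief-fund.example", ZONE_RECORDS)):
        print("shares infrastructure with known-bad domain:", domain)
```

The same pivot works on registrant data or hosting IP addresses; the value comes from starting with one confirmed bad name and letting shared infrastructure surface the rest of the campaign.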



The One Reason Net Neutrality Can't Be Implemented

2017-09-08T10:11:00-08:00

Suppose for a moment that you are the victim of a wicked ISP that engages in disallowed "throttling" under a "neutral" regime for Internet access. You like to access streaming media from a particular "over the top" service provider. By coincidence, the performance of your favoured application drops at the same time your ISP launches a rival content service of its own. You then complain to the regulator, who investigates. She finds that your ISP did indeed change their traffic management settings right at the point that the "throttling" began. A swathe of routes, including the one to your preferred "over the top" application, has been given a different packet scheduling and routing treatment. It seems like an open-and-shut case of "throttling" resulting in a disallowed "neutrality violation". Or is it?

Here's why the regulator's enforcement order will never survive the resulting court case and expert witness scrutiny. The regulator is going to have to prove that the combination of all of the network algorithms and settings intentionally resulted in a specific performance degradation. This is important because in today's packet networks performance is an emergent phenomenon. It is not engineered to known safety margins, and can (and does) shift continually with no intentional cause. That means it could just be a coincidence that it changed at that moment. (Any good Bayesian will also tell you that we're assuming a "travesty of justice" prior.)

What net neutrality advocates are implicitly saying is this: by inspecting the code and configuration (i.e. more code) of millions of interacting local processes in a network, you can tell what global performance is supposed to result. Furthermore, that a change in one of those settings deliberately gave a different and disallowed performance, and you can show it's not mere coincidence.

In the 1930s, Alan Turing proved that you can't even (in general) inspect a single computational process and tell whether it will stop. This is called the Halting Problem. This is not an intuitive result. The naive observer without a background in computer science might assume it is trivially simple to inspect an arbitrary program and quickly tell whether it would ever terminate.

What the telco regulator implementing "neutrality" faces is a far worse case: the Performance Problem. Rather than a single process, we have lots. And instead of a simple binary yes/no to halting, we have a complex multi-dimensional network and application performance space to inhabit. I hardly need to point out the inherently hopeless nature of this undertaking: enforcing "neutrality" is a monumental misunderstanding of what is required to succeed.

Yet the regulatory system for broadband performance appears to have been infiltrated and overrun by naive observers without an undergraduate-level understanding of distributed computing. Good and smart people think they are engaged in a neutrality "debate", but the subject is fundamentally and irrevocably divorced from technical reality. There's not even a mention of basic ideas like non-determinism in the academic literature. It's painful to watch this regulatory ship of fools steam at full speed for the jagged rocks of practical enforcement.

It is true that the Halting Problem can be solved in limited cases. It is a real systems management issue in data centres, and a lot of research work has been done to identify those cases.
If some process has been running for a long time, you don't want it sitting there consuming electricity forever with no value being created. Likewise, the Performance Probl[...]
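For readers who have not met the Halting Problem, the classic argument compresses into a few lines of code. This is a standard textbook sketch rather than anything from the article: assume a perfect halts() oracle existed, and a small program built on top of it produces a contradiction.

```python
# Standard textbook sketch of why a general halts() decider cannot exist.
# halts() is a stand-in for the impossible oracle; it can never be implemented.

def halts(program, argument) -> bool:
    """Hypothetical oracle: returns True iff program(argument) eventually stops."""
    raise NotImplementedError("no such general decider can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running program on itself.
    if halts(program, program):
        while True:          # oracle said it halts, so loop forever
            pass
    return "halted"          # oracle said it loops forever, so halt immediately

# If halts() were real, paradox(paradox) would halt exactly when the oracle says it
# does not, and vice versa; the contradiction shows no such oracle can exist.
```

The article's point is that a regulator's task is strictly harder: not one program and a yes/no question, but millions of interacting processes and a multi-dimensional performance question.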



Equifax Hacked, Nearly Half of US Population Affected

2017-09-07T15:37:01-08:00

Rick Smith, Chairman and CEO of Equifax Inc., on the cybersecurity incident involving consumer information: Equifax has established a dedicated website, www.equifaxsecurity2017.com, to help consumers determine if their information has been potentially impacted and to sign up for credit file monitoring and identity theft protection.

In an announcement today, credit reporting giant Equifax revealed a cybersecurity incident potentially impacting approximately 143 million U.S. consumers. The historic data breach has exposed names, Social Security numbers, birth dates, addresses and, in some instances, driver's license numbers, Equifax said in the statement. "In addition, credit card numbers for approximately 209,000 U.S. consumers, and certain dispute documents with personal identifying information for approximately 182,000 U.S. consumers, were accessed." Equifax has also identified unauthorized access to limited personal information for certain UK and Canadian residents. The company says it has found no evidence of unauthorized activity on Equifax's core consumer or commercial credit reporting databases.

Follow CircleID on Twitter

More under: Cyberattack, Cybercrime, Cybersecurity




Fact Checking the Recent News About Google in Cuba

2017-09-07T14:52:00-08:00

The Cuban Internet is constrained by the Cuban government and, to a lesser extent, the US government, not Google. Google's Cuba project has been in the news lately. Mary Anastasia O'Grady wrote a Wall Street Journal article called "Google's Broken Promise to Cubans," criticising Google for being "wholly uninterested in the Cuban struggle for free speech" and assisting the Castro government. The article begins by taking a shot at President Obama, who "raved" about an impending Google-Cuba deal "to start setting up more Wi-Fi access and broadband access on the island." (The use of the word "raved" nearly caused me to dismiss the article and stop reading, but I forced myself to continue). The next paragraph tells us "Google has become a supplier of resources to the regime so that Raúl Castro can run internet (sic) at faster speeds for his own purposes." The article goes on to tell us that Brett Perlmutter of Google "boasted" that Google was "thrilled to partner" with a regime-owned museum, featuring a Castro-approved artist. (Like "raved," the use of the word "boasted" seemed Trump-worthy, but I kept reading). O'Grady also referred to a July 2015 Miami Herald report that Perlmutter had pitched a proposal to build an island-wide digital infrastructure that the Cuban government rejected. Next came the buried lead — it turns out this article was precipitated by blocked Cuban access to the pro-democracy Web site Cubadecide.org. Perlmutter tweeted that the site was blocked because of the US embargo on Cuba. Well, that is enough. Let's do some fact checking.

President Obama's "raving": It is true that President Obama made a number of (in retrospect) overly optimistic predictions during his Cuba trip, but the use of the word "raving" and the obligatory shot at President Obama were clues that O'Grady might not be impartial and objective.

Google as a supplier of resources: This presumably is a reference to Google's caching servers in Cuba. While these servers marginally speed access to Google applications like Gmail and YouTube, it is hard to see how that helps Raul Castro. It has been reported that Cuba agreed "not censor, surveil or interfere with the content stored" on Google's caching servers. Furthermore, Gmail is encrypted and YouTube is open to all comers — for and against the Cuban government.

Brett Perlmutter's boasting about partnering with a Cuban artist on the installation of a free WiFi hotspot: I agree that the WiFi hotspot at the studio of the Cuban artist Kcho is an over-publicized drop in the bucket — much ado about not much.

Google's rejected offer of an island-wide digital infrastructure: I have seen many, many (now I'm channeling Trump) references to this "offer," but have no idea what was offered. Google won't tell me and I've seen no documentation on the offer.

Google's blocking of Cubadecide.org: It is true that Google blocks access to Cubadecide.org. Furthermore, they block access from Cuba to all sites that are hosted on their infrastructure. Microsoft also blocks Cuban access to sites they host; however, Amazon and Rackspace do not. Cubadecide.org could solve their problem by moving their site to Amazon, Rackspace or a different hosting service that does not block Cuban access.

Perlmutter blames the embargo: I don't want to give Google a pass on this. The next question is "why does Amazon allow Cuban access and Google does not?" They are both subject to the same US laws. IBM is a more interesting case — they did not block access at first but changed[...]