Subscribe: CircleID
http://www.circleid.com/rss/rss_all/
Language: English



CircleID



Latest posts on CircleID



Updated: 2017-09-25T13:15:00-08:00

 



The Role of Domain Name Privacy and Proxy Services in URS Disputes

2017-09-25T13:15:00-08:00

Here's another apparent limitation of the Uniform Rapid Suspension System (URS), the domain name dispute policy that applies to the new generic top-level domains (gTLDs): proceedings are unlikely to unmask cybersquatters hiding behind privacy or proxy services. Domain name registrants often use these privacy and proxy services to hide their identities when they register domain names. The services have legitimate uses but are controversial. In proceedings under the Uniform Domain Name Dispute Resolution Policy (UDRP), the privacy veil is often lifted after a complaint has been filed, allowing a trademark owner to learn the identity of the so-called underlying registrant. Doing so can be beneficial to a trademark owner complainant, creating leverage and possibly leading to further evidence of bad faith or links to additional domain names. At WIPO (the leading provider of UDRP services), a complainant is typically offered an opportunity to amend a complaint after the underlying registrant has been identified during the administrative compliance phase. Here's what WIPO's Overview 3.0 says (in part) on the topic: "When provided with underlying registrant information which differs from the respondent named in the complaint, a complainant may either add the disclosed underlying registrant as a co-respondent, or replace the originally named privacy or proxy service with the disclosed underlying registrant. In either event, complainants may also amend or supplement certain substantive aspects of the complaint (notably the second and third elements) in function of any such disclosure."

However, the URS — a quicker process that is "not intended for use in any proceedings with open questions of fact, but only clear cases of trademark abuse" — does not provide for such amendments or supplements to a complaint. Indeed, the Forum (the leading provider of URS services) has a supplemental rule that expressly says: "The Complaint may not be amended at any time."
As a result, a review of URS cases shows that many identify the respondent only as a privacy or proxy service, such as the popular Domains By Proxy, because the underlying registrant is never disclosed during the course of a URS proceeding. Had the trademark owner elected instead to file a UDRP complaint for the same domain name (usually an option, given that all new gTLDs are subject to both the URS and the UDRP), then the record might have identified the underlying registrant rather than the privacy or proxy service. Of course, the URS continues to offer some advantages over the UDRP (notably quicker, less expensive resolutions), but the URS has long been criticized for its shortcomings (such as its ability only to suspend, not transfer, a disputed domain name). Now, it seems that the URS has yet another shortcoming that trademark owners should consider when deciding whether to file a URS or UDRP complaint: if learning a hidden registrant's true identity is important, then a UDRP proceeding might be a better option than the URS.

Written by Doug Isenberg, Attorney & Founder of The GigaLaw Firm

Follow CircleID on Twitter

More under: Domain Names, Law, Policy & Regulation, Privacy, Top-Level Domains [...]



Catalan Government Claims Spanish Online Censorship Breaching EU Laws

2017-09-24T07:50:00-08:00

The Catalan government has written to the European Commission claiming that the Spanish government is in breach of EU law. In a letter from Jordi Puigneró, Secretary of Telecommunications, Cybersecurity and the Digital Society at the Government of Catalonia, addressed to Andrus Ansip, European Commissioner for Digital Economy and Society, the Catalan government calls out the moves by the Madrid government as censorship. Over the past ten days, the Spanish government has issued court orders to multiple entities, including the .cat domain name registry, whose offices were also raided, as well as to Spanish ISPs, with the goal of blocking access to websites and other content related to the upcoming referendum in Catalonia. The letter refers to the court order the .cat registry received, which demanded that it block all .cat domain names that "could be about or point to any content related to the referendum". It also cites the worldwide media coverage of the raid on the .cat offices and the blocking of multiple websites (and domains) related to the referendum. Apparently, the court orders being issued to the ISPs in Spain are very broad, as the letter refers to orders blocking access to "all websites publicised by any member of the Catalan government in any social network that has a direct or indirect relation with the referendum without any further court order". How ISPs are meant to implement that kind of court order is beyond me, as it sounds incredibly vague and the judicial equivalent of using a sledgehammer to crack a walnut. Whether the European Commission will make any public comment in reaction to this letter is debatable, but the concerns raised by Jordi Puigneró are shared by many observers around the globe. The Spanish government's actions in Catalonia have received widespread criticism from many in civil society, including ISOC and the EFF.
Written by Michele Neylon, MD of Blacknight Solutions

Follow CircleID on Twitter

More under: Censorship, Internet Governance, Policy & Regulation, Registry Services, Top-Level Domains [...]



What Does the Future Hold for the Internet?

2017-09-22T14:38:00-08:00

Explore the interactive 2017 Global Internet Report: Paths to Our Digital Future

This is the fundamental question that the Internet Society is posing through the report just launched today, our 2017 Global Internet Report: Paths to Our Digital Future. The report is a window into the diverse views and perspectives of a global community that cares deeply about how the Internet will evolve and impact humanity over the next 5-7 years. We couldn't know what we would find when we embarked on the journey to map what stakeholders believe could shape the future of the Internet, nor can we truly know what will happen to the Internet, but we do now have a sense of what we need to think about today to help shape the Internet of tomorrow. The report reflects the views and aspirations of our community as well as some of the most pressing challenges facing the future of this great innovation. What have we learned? We've learned that our community remains confident that the core values that gave rise to the Internet remain valid. We also heard very strong worries that the user-centric model of the Internet is under extraordinary pressure from governments, from technology giants, and even from the technology itself. There is a sense that there are forces beyond the users' control that may define the Internet's future, and that the user may no longer be at the center of the Internet's path. It is, perhaps, trite to say that the world is more connected today than ever before. Indeed, we are only beginning to understand the implications of a hyperconnected society that is dependent on the generation, collection and movement of data in ways that many do not fully understand. The Internet of the future will most certainly enable a host of products and services that could revolutionize our daily lives. At the same time, our dependence on the technology raises a myriad of challenges that society may be ill-equipped to address.
Clearly, the Internet is increasingly intertwined with a geopolitical environment that feels uncertain and even precarious. The Internet provides governments both with opportunities to better the lives of their people and with tools for surveillance and even control. This report highlights the serious choices we all must make about how to ensure that rights and freedoms prevail in the Internet of the future. The decisions we make will determine whether humanity remains in the driver's seat of technology or not. In short, the decisions we make about the Internet can no longer be seen as "separate", as "over there" — the implications of a globally interconnected world will be felt by all of us, and the decisions we make about the Internet will be felt far and wide. We are still just beginning to understand the implications of a globally connected society and what it will mean for individuals, business, government and society at large. How we address the opportunities and challenges that today's forces of change are creating for the future is paramount, but one thing above all others is certain — the choices are ours alone to make, and the future we want is up to us to shape.

Explore the interactive 2017 Global Internet Report: Paths to Our Digital Future

Written by Sally Shipman Wentworth, VP of Global Policy Development, Internet Society

Follow CircleID on Twitter

More under: Broadband, Censorship, Cybersecurity, Internet Governance, Internet Protocol, Mobile Internet, Networks, Policy & Regulation, Privacy, Web [...]



Google Global Cache Servers Go Online in Cuba, But App Engine Blocked

2017-09-22T11:28:00-08:00

I had hoped to get more information before publishing this post, but difficult Internet access in Cuba and now the hurricane got in the way — better late than never. Cuban requests for Google services are being routed to Google Global Cache (GGC) servers in Cuba, and all Google services *that are available in Cuba* are being cached — not just YouTube. That will cut latency significantly, but Cuban data rates remain painfully slow. My guess is that Cubans will notice the improved performance in interactive applications, but maybe not perceive much of a change when watching a streaming video. Note the emphasis in the above paragraph — evidently, Google blocks access to their App Engine hosting and application development platform. Cuban developers cannot build App Engine applications, and Cubans cannot access applications like the Khan Academy or Google's G-Suite. [...]



Networks Are Not Cars Nor Cell Phones

2017-09-21T09:24:00-08:00

The network engineering world has long emphasized the longevity of the hardware we buy; I have sat through many vendor presentations where the salesman says "this feature set makes our product future proof! You can buy with confidence knowing this product will not need to be replaced for another ten years..." Over at the Networking Nerd, Tom has an article posted supporting this view of networking equipment, entitled Network Longevity: Think Car, not iPhone. It seems, to me, that these concepts of longevity have the entire situation precisely backward. These ideas of "car length longevity" and "future proof hardware" look at the network as an appliance, rather than as a set of services. Let me put this in a little bit of context by considering two specific examples. In terms of cars, I have owned four in the last 31 years. I owned a Jeep Wrangler for 13 years, a second Jeep Wrangler for eight years, and a third Jeep Wrangler for nine years. I have recently switched to a Jeep Cherokee, which I have now been driving for just about a year. What if I bought network equipment like I buy cars? What sort of router was available nine years ago, in 2008? I was still working at Cisco, and my lab, if I remember right, was made up of 7200s and 2600s. Younger engineers probably look at those model numbers and see completely different equipment than what I actually had; I doubt many readers of this blog ever deployed 7200s of the kind I had in my lab. Do I really want to run a network today on 9-year-old hardware? I don't see how the answer to that question can be "yes." Why? First, do you really know what hardware capacity you will need in ten years? Really?
I doubt your business leaders can tell you what products they will be creating in ten years beyond a general description, nor can they tell you how large the company will be, who their competitors will be, or what shifts might occur in the competitive landscape. Hardware vendors try to get around this by building big chassis boxes and selling blades that slide into them. But does this model really work? The Cisco 7500 was the current chassis box nine years ago, I think — even if you could get blades for it today, would it meet your needs? Would you really want to pay the power and cooling for an old 7500 for nine years because you didn't know whether you would need one slot or seven nine years ago? Building a hardware platform for ten years of service in a world where two years is too far to predict is like rearranging the chairs on the Titanic. It's entertaining, perhaps, but it's pretty pointless entertainment. Second, why are we not taking the lessons of the compute and storage worlds into our thinking, and learning to scale out, rather than scaling up? We treat our routers the way the server folks of yore treated their servers — add another blade slot and make it go faster. Scale up makes your network do this — [chart in the original post]. Do you see those grey areas? They are costing you money. Do you enjoy defenestrating money? These are symptoms of looking at the network as a bunch of wires and appliances, as hardware with a little side of software thrown in. What about the software? Well, it may be hard to believe, but pretty much every commercial operating system available for routers today is an updated version of software that was available ten years ago. Some, in fact, are more than twenty years old. We don't tend to see this because we deploy routers and switches as appliances, which means we treat the software as just another form of hardware.
We might deploy ten to fifteen different operating systems in our network without thinking about it — something we would never do in our data centers, or on our desktop computers. So what this appliance-based way of looking at things emphasizes is this: buy enough hardware to last you ten years, and treat the software as fungible — software is a sec[...]



Spanish Police Raid the Offices of .cat gTLD Registry

2017-09-20T07:29:00-08:00

Photo posted by Fundació puntCAT during the raid.

The offices of the .cat gTLD registry Fundació puntCAT were raided by the Spanish police this morning. The company reported the incident via a series of tweets as the raid was being carried out. "Right now spanish police @guardiacivil is doing an intervention in our office @ICANN," was tweeted about 4 hours ago, followed by another tweet reporting that the police were headed to the CTO's home: "We're wating for him to arrive to our office to start the intervention." Michele Neylon writes: "The move comes a couple of days after a Spanish court ordered the domain registry to take down all .cat domain names being used by the upcoming Catalan referendum. The .cat domain registry currently has over 100 thousand active domain names, and in light of the actions taken by the Spanish government, it's unclear how the registry will continue to operate if their offices are effectively shutdown by the Spanish authorities. The seizure won't impact live domain names or general day to day operations by registrars, as the registry backend is run by CORE and leverages global DNS infrastructure. However, it is deeply worrying that the Spanish government's actions would spill over onto an entire namespace."

— Update – 20 Sep 2017: puntCAT's head of IT, Pep Masoliver, has been arrested as part of a Spanish government crackdown on pushes for independence, reports Kevin Murphy in Domain Incite: "He's been charged with 'sedition' and is still in police custody this evening… His arrest coincided with the military police raid of puntCAT's office in Barcelona that started this morning, related to a forthcoming Catalan independence referendum."
— Fundació puntCAT releases statement: "The Fundació puntCAT wants to express its utmost condemnation, indignation and reprobation for the actions that it has been suffering lately with successive judicial mandates, searches and finally the arrest of our Director of Innovation and Information Systems, Pep Masoliver. ... The show that we have experienced in our offices this morning has been shameful and degrading, unworthy of a civilized country. We feel helpless in the face of these immensely disproportionate facts. We demand the immediate release of our colleague and friend."

— Update – 21 Sep 2017: EFF issues press letter condemning the police raid: "We have deep concerns about the use of the domain name system to censor content in general, even when such seizures are authorized by a court, as happened here. And there are two particular factors that compound those concerns in this case. First, the content in question here is essentially political speech, which the European Court of Human Rights has ruled as deserving of a higher level of protection than some other forms of speech. Even though the speech concerns a referendum that has been ruled illegal, the speech does not in itself pose any imminent threat to life or limb. The second factor that especially concerns us here is that the seizure took place with only 10 days remaining until the scheduled referendum, making it unlikely that the legality of the domains' seizures could be judicially reviewed before the referendum is scheduled to take place."

Follow CircleID on Twitter

More under: Registry Services, Top-Level Domains [...]



The Madness of Broadband Speed Tests

2017-09-19T10:55:00-08:00

The broadband industry has falsely sold its customers on "speed", so unsurprisingly "speed tests" have become an insane and destructive benchmark. As a child, I would go to bed, and sometimes the garage door would swing open before I went to sleep. My father had come home early from the late shift, where he was a Licensed Aircraft Maintenance Engineer for British Airways. I would wait for him eagerly, and he would come upstairs, still smelling of kerosene and Swarfega. With me lying in bed, he would tell me tales of his work, and stories about the world. (Image caption: "Just don't break the wings off as you board!") Funnily enough, he never told me about British Airways breaking the wings off its aircraft. You see, he was involved in major maintenance checks on Boeing 747s. He joined BOAC in 1970 and stayed with the company for 34 years until retirement. Not once did he even hint at any desire for destructive testing of aircraft. Now, when a manufacturer makes a brand new airplane type, it does test it to destruction. Here's a picture I shamelessly nicked showing the Airbus A350 wing flex test. I can assure you, they don't do this in the British Airways hangars TBJ and TBK at the Hatton Cross maintenance base at Heathrow. Instead, they use non-destructive testing with ultrasound and X-rays to look for cracks and defects. So what's this all got to do with broadband? Well, we're doing the equivalent of asking the customers to break the wings off every time they board. And even worse, our own engineers have adopted destructive testing over non-destructive testing! Because marketing departments at ISPs refuse to define what experience the product actually intends to deliver (and what it is unreasonable to expect), the network engineers are left with a single and simple marketing requirement: "make it better than it was". When you probe them on what this means, they shrug and tell you "well, we're selling all our products on peak speed, so we try to make the speed tests better".
This, my friends, is bonkers. The first problem is that the end users are conducting a denial-of-service attack on themselves and their neighbours. A speed test deliberately saturates the network, placing it under maximum possible stress. The second problem is that ISPs themselves have adopted speed tests internally, so they are driving mad levels of cost carrying useless traffic designed to over-stress their network elements. Then, to top it all, regulators are encouraging speed tests as a key metric, deploying huge numbers of boxes hammering the broadband infrastructure even in its most fragile peak hour. The proportion of traffic coming from speed tests is non-trivial. So what's the alternative? Easy! Instead of destructive testing, do non-destructive testing. We know how to X-ray a network, and the results are rather revealing. If you use the right metrics, you can also model the performance limits of any application from the measurements you take. Even a speed test! So you don't need to snap the wings off your broadband service every time you use it after all. I think I'll tell my daughters at their next bedtime. It's good life guidance. Although I can imagine my 14-year-old dismissing it as another embarrassing fatherly gesture and uninteresting piece of parental advice. Sometimes it takes a while to appreciate our inherited wisdom.

Written by Martin Geddes, Founder, Martin Geddes Consulting Ltd

Follow CircleID on Twitter

More under: Access Providers, Broadband, Telecom [...]
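The non-destructive alternative the post argues for can be sketched in a few lines: instead of saturating the link, send sparse, tiny probes and derive baseline and queueing delay from the round-trip-time distribution. This is only an illustrative sketch — the function names, the percentile choices, and the 150 ms delay budget are my assumptions, not anything specified in the post:

```python
def percentile(sorted_vals, p):
    """Nearest-rank percentile of an already-sorted list."""
    idx = min(len(sorted_vals) - 1, round(p / 100 * (len(sorted_vals) - 1)))
    return sorted_vals[idx]

def assess_link(rtt_samples_ms, app_rtt_budget_ms=150.0):
    """Non-destructive link assessment from sparse RTT probes.

    The probes are tiny packets sent at a low rate, so the measurement
    itself does not stress the network the way a saturating speed test
    does. p50 approximates the baseline (propagation + serialization)
    delay; p99 - p50 approximates queueing delay under load.
    """
    s = sorted(rtt_samples_ms)
    p50, p99 = percentile(s, 50), percentile(s, 99)
    return {
        "baseline_ms": p50,
        "queuing_ms": p99 - p50,
        "meets_budget": p99 <= app_rtt_budget_ms,
    }
```

With per-application delay budgets like this (say, 150 ms for interactive voice), an ISP could report whether a service meets its target without ever saturating the line.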



EFF Resigns from World Wide Web Consortium (W3C) over EME Decision

2017-09-19T07:36:00-08:00

In an open letter to the World Wide Web Consortium (W3C), the Electronic Frontier Foundation (EFF) announced on Tuesday that it is resigning from the organization in response to the W3C publishing Encrypted Media Extensions (EME) as a standard. From the letter: "In 2013, EFF was disappointed to learn that the W3C had taken on the project of standardizing "Encrypted Media Extensions," an API whose sole function was to provide a first-class role for DRM within the Web browser ecosystem. By doing so, the organization offered the use of its patent pool, its staff support, and its moral authority to the idea that browsers can and should be designed to cede control over key aspects from users to remote parties. ... We believe they will regret that choice. Today, the W3C bequeaths an legally unauditable attack-surface to browsers used by billions of people. They give media companies the power to sue or intimidate away those who might re-purpose video for people with disabilities. They side against the archivists who are scrambling to preserve the public record of our era. The W3C process has been abused by companies that made their fortunes by upsetting the established order, and now, thanks to EME, they'll be able to ensure no one ever subjects them to the same innovative pressures."

Follow CircleID on Twitter

More under: Cybersecurity, Policy & Regulation, Privacy, Web




Net Neutrality Advocates Planning Two Days of Protest in Washington DC

2017-09-18T09:53:00-08:00

A coalition of activists and consumer groups is planning to gather in Washington, DC to meet directly with members of Congress, as they protest plans to defang regulations meant to protect an open internet.

The event organizer, Fight for the Future, is running a dedicated website 'battleforthenet.com/dc' in which it states in part: "On September 26-27 Internet users from across the country will converge on Washington, DC to meet directly with their members of Congress, which is by far the most effective way to influence their positions and counter the power of telecom lobbyists and campaign contributions. ... The only thing that can stop them is a coordinated grassroots effort of constituents directly pressuring our members of Congress, who have the power to stop the FCC and vote down bad legislation."

Participating organizations in the protest include Fight for the Future, Public Knowledge, EFF, Center for Media Justice, Common Cause, Consumers Union, Free Press and the Writers Guild of America West. See additional report by Dominic Rushe in The Guardian.

Follow CircleID on Twitter

More under: Net Neutrality, Policy & Regulation




Forty Percent of New Generic TLDs Shrinking, According to Domain Incite Analysis

2017-09-18T08:39:00-08:00

Forty percent of non-brand new gTLDs are shrinking, reports Kevin Murphy in Domain Incite: "According to numbers culled from registry reports, 172 of the 436 commercial gTLDs we looked at had fewer domains under management at the start of June than they did a year earlier. ... As you might expect, registries with the greatest exposure to the budget and/or Chinese markets were hardest hit over the period. .wang, .red, .ren, .science and .party all saw DUM decline by six figures. Another 27 gTLDs saw declines of over 10,000 names."

Follow CircleID on Twitter

More under: Domain Names, Registry Services, Top-Level Domains




Preliminary Thoughts on the Equifax Hack

2017-09-17T10:08:00-08:00

As you've undoubtedly heard, the Equifax credit reporting agency was hit by a major attack, exposing the personal data of 143 million Americans and many more people in other countries. There's been a lot of discussion of liability; as of a few days ago, at least 25 lawsuits had been filed, with the state of Massachusetts preparing its own suit. It's certainly too soon to draw any firm conclusions about who, if anyone, is at fault — we need more information, which may not be available until discovery during a lawsuit — but there are a number of interesting things we can glean from Equifax's latest statement. First and foremost, the attackers exploited a known bug in the open source Apache Struts package. A patch was available on March 6. Equifax says that their "Security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company's IT infrastructure." The obvious question is why this particular system was not patched. One possible answer is, of course, that patching is hard. Were they trying? What does "took efforts to identify and to patch" mean? Were the assorted development groups actively installing the patch and testing the resulting system? It turns out that this fix is difficult to install: You then have to hope that nothing is broken. If you're using Struts 2.3.5 then in theory Struts 2.3.32 won't break anything. In theory it's just bug fixes and security updates, because the major.minor version is unchanged. In theory. In practice, I think any developer going from 2.3.5 to 2.3.32 without a QA cycle is very brave, or very foolhardy, or some combination of the two. Sure, you'll have your unit tests (maybe), but you'll probably need to deploy into your QA environment and do some kind of integration testing too. That's assuming, of course, that you have a compatible QA environment within which you can deploy your old, possibly abandoned application. 
Were they trying hard enough, i.e., devoting enough resources to the problem? Ascertaining liability here — moral and/or legal — can't be done without seeing the email traffic between the security organization and the relevant development groups; you'd also have to see the activity logs (code changes, test runs, etc.) of these groups. Furthermore, if problems were found during testing, it might take quite a while to correct the code, especially if there were many Struts apps that needed to be fixed. As hard as patching and testing are, though, when there are active exploitations going on you have to take the risk and patch immediately. That was the case with this vulnerability. Did the Security group know about the active attacks or not? If they didn't, they probably aren't paying enough attention to important information sources. Again, this is information we're only likely to learn through discovery. If they did know, why didn't they order a flash-patch? Did they even know which systems were vulnerable? Put another way, did they have access to a comprehensive database of hardware and software systems in the company? They need one — there are all sorts of other things you can't do easily without such a database. Companies that don't invest up front in their IT infrastructure will hurt in many other ways, too. Equifax has a market capitalization of more than $17 billion; they don't really have an excuse for not running a good IT shop. It may be, of course, that Equifax knew all of that and still chose to leave the vulnerable servers up. Why? Apparently, the vulnerable machine was their "U.S. online dispute portal". I'm pretty certain that they're required by law to have a dispute mechanism, and while it probably doesn't have to be a website (and some people suggest that complainant[...]
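The inventory question above is concrete enough to sketch. Assuming a hypothetical asset database that records each deployed app and its Struts version, a flash-patch triage could flag anything older than the fixed releases (2.3.32 for the 2.3 line, 2.5.10.1 for 2.5 — the versions that closed the March 2017 Struts hole). The app names and data layout here are invented for illustration only:

```python
# Versions that fixed the March 2017 Apache Struts vulnerability,
# keyed by release line (2.3.x and 2.5.x).
FIXED_IN = {(2, 3): (2, 3, 32), (2, 5): (2, 5, 10, 1)}

def parse_version(v):
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(version):
    """True if this Struts version predates the fix for its release line."""
    v = parse_version(version)
    fixed = FIXED_IN.get(v[:2])
    return fixed is not None and v < fixed

# Hypothetical software inventory -- the kind of database the post argues
# every large IT shop needs before it can even attempt a flash-patch.
inventory = [
    {"app": "dispute-portal", "struts": "2.3.5"},
    {"app": "billing-api", "struts": "2.3.32"},
    {"app": "partner-gateway", "struts": "2.5.10"},
]

flagged = [e["app"] for e in inventory if is_vulnerable(e["struts"])]
print(flagged)  # apps that need an emergency patch
```

The point is not the version arithmetic but the prerequisite: without a comprehensive inventory to run a query like this against, a security organization cannot even know which systems to flash-patch.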



China to Create National Cyberattack Database

2017-09-15T13:43:00-08:00

China has revealed plans to create a national data repository for information on cyberattacks and will require telecom firms, internet companies and domain name service providers to report threats to it. Reuters reports: "The Ministry of Industry and Information Technology (MIIT) said companies and telcos as well as government bodies must share information on incidents including Trojan malware, hardware vulnerabilities, and content linked to "malicious" IP addresses to the new platform. An MIIT policy note also said that the ministry, which is creating the platform, will be liable for disposing of threats under the new rules, which will take effect on Jan. 1."

Follow CircleID on Twitter

More under: Cybercrime, Cybersecurity, Policy & Regulation, Registry Services, Telecom




Bluetooth-Based Attack Vector Dubbed "BlueBorne" Exposes Almost Every Connected Device

2017-09-15T13:30:00-08:00

A newly discovered set of zero-day Bluetooth-related vulnerabilities can affect billions of devices in use today. Security firm Armis Labs has revealed a new attack vector that can target major mobile, desktop, and IoT operating systems, including Android, iOS, Windows, and Linux, and the devices using them. The new vector is named "BlueBorne", as it spreads through the air (airborne) and attacks devices via Bluetooth.

No pairing required: "BlueBorne is an attack vector by which hackers can leverage Bluetooth connections to penetrate and take complete control over targeted devices. BlueBorne affects ordinary computers, mobile phones, and the expanding realm of IoT devices. The attack does not require the targeted device to be paired to the attacker's device, or even to be set on discoverable mode."

— "The BlueBorne attack vector has several qualities which can have a devastating effect when combined. By spreading through the air, BlueBorne targets the weakest spot in the networks' defense — and the only one that no security measure protects. Spreading from device to device through the air also makes BlueBorne highly infectious. Moreover, since the Bluetooth process has high privileges on all operating systems, exploiting it provides virtually full control over the device."

Vulnerabilities were found in Android, Windows, Linux, and iOS versions prior to iOS 10. "Armis reported the vulnerabilities to Google, Microsoft, and the Linux community. Google and Microsoft are releasing updates and patches on Tuesday, September 12. Others are preparing patches that are in various stages of being released."

Follow CircleID on Twitter

More under: Cyberattack, Cybersecurity, Malware, Mobile Internet, Wireless




U.S. Navy Investigating Possibility of Cyberattack Behind Two Navy Destroyer Collisions

2017-09-15T12:53:00-08:00


Deputy chief of naval operations for information warfare, Vice Adm. Jan Tighe, says the military is investigating the possibility of compromised computer systems behind two U.S. Navy destroyer collisions with merchant vessels that occurred in recent months. Elias Groll, reporting in Foreign Policy: "Naval investigators are scrambling to determine the causes of the mishaps, including whether hackers infiltrated the computer systems of the USS John S. McCain ahead of the collision on Aug. 21, Tighe said during an appearance at the Center for Strategic and International Studies in Washington… [T]he Navy has no indication that a cyberattack was behind either of the incidents, but it is dispatching investigators to the McCain to put those questions to rest, she said."


More under: Cyberattack, Cybersecurity




In Response to 'Networking Vendors Are Only Good for the Free Lunch'

2017-09-14T15:39:00-08:00

I ran into an article over at the Register this week which painted the entire networking industry, from vendors to standards bodies, with a rather broad brush. While there are true bits and pieces in the piece, some balance seems to be in order. The article recaps a presentation by Peyton Koran at Electronic Arts (I suspect the Register spiced things up a little for effect); the line of argument seems to run something like this:

- Vendors only pay attention to larger customers, and/or a large group of customers asking for the same thing; if you are not in either group, then you get no service from any vendor.
- Vendors further bake secret sauce into their hardware, making it impossible to get what you want from your network without buying from them.
- Standards bodies are too slow, and hence useless.
- People are working around this, and getting to the interoperable networks they really want, by moving to the cloud.
- There is another way: just treat your networking gear like servers, and write your own protocols; after all, you probably already have programmers on staff who know how to do this.

Let's think about these a little more deeply.

Vendors only pay attention to big customers and/or big markets. – Ummm… Yes. I do not know of any company that does anything different here, including the Register itself. If you can find a company that actually seeks the smallest market, please tell me about it, so I can avoid its products, as it is very likely to go out of business in the near future. So this is true, but it is just a part of the real world.

Vendors bake secret sauce into their hardware to increase their profits. – Well, again… Yes. And how is any game vendor any different, for instance? Or what about an online shop that sells content? Okay, next.

Standards bodies are too slow, and hence useless.
– Whenever I hear this complaint, I wonder whether the person making it has ever built a real, live, running system, or a real, deployed standard that provides interoperability across a lot of different vendors, open source projects, etc. Yes, it often seems silly how long it takes the IETF to ratify something as a standard. But have you ever considered how many times things are widely implemented and deployed before there is a standard? Have you ever really looked at the way standards bodies work, and understood that there are many different kinds of standards, each with a different meaning, and that not everything needs to be the absolute tip-top rung on the standards ladder to be useful? Have you ever asked how long it takes to build anything large and complicated? I guess we could say the entire open source community is slow and useless because it took many years for even the Linux operating system to be widely deployed and to solve a lot of problems.

Look, I know the IETF is slow. And I know the IETF has a lot more politics than it should. I live both of those things. But I also know the fastest answer is not always the right answer, and throwing away decades of experience in designing protocols that actually work is a pretty dumb idea — unless you really just want to reinvent the wheel every time you need to build a car.

In the next couple of sentences, we suddenly find that someone needs to call out the contradiction police, replete in their bright yellow suits and funny hats. Because now it seems people want interoperable networks without standards bodies! Let me make a simple point here that many people just do not seem to realize: you cannot have interoperability across multiple vendors and multiple open source projects without some forum where they can all dis[...]



Abusive and Malicious Registrations of Domain Names

2017-09-14T07:43:00-08:00

When ICANN implemented the Uniform Domain Name Dispute Resolution Policy (UDRP) in 1999, it explained its purpose as combating "abusive registrations" of domain names, which it defined as registrations "made with bad-faith intent to profit commercially from others' trademarks (e.g., cybersquatting and cyberpiracy)." (The full statement can be found in the Second Staff Report on Implementation Documents for the Uniform Dispute Resolution Policy, Paragraph 4.1(c)). Bad actors employ a palette of stratagems, such as combining marks with generic qualifiers, truncating or varying marks, or removing, reversing, and rearranging letters within the second-level domain (typosquatting). These registrations are costly to police, and likely even more costly when owners must maintain forfeited domain names, but for all the pain they inflict they are essentially plain-vanilla irritants.

While these kinds of disputes dominate the UDRP docket, there has been an increase in the number of disputes involving malicious registrations. The first instances of "phishing" and "spoofing" appear in a 2005 case, CareerBuilder, LLC v. Stephen Baker, D2005-0251 (WIPO May 6, 2005), in which the Panel found that the "disputed domain name is being used as part of a phishing attack (i.e., using 'spoofed' e-mails and a fraudulent website designed to fool recipients into divulging personal financial data such as credit card numbers, account usernames and passwords, social security numbers, etc.")

The quainter forms of abuse come from registrants looking to pluck lower-hanging fruit. They are so obviously opportunistic that respondents don't even bother to appear (respondents in the malicious cases also fail to appear, but for another reason: to conceal their identity). The plain-vanilla type is represented by such cases as Guess? IP Holder L.P. and Guess? Inc. v. Domain Admin: Damon Nelson — Manager, Quantec LLC, Novo Point LLC, D2017-1350 (WIPO August 24, 2017), in which Complainant's product line includes "accessories."
In these types of cases, respondents are essentially looking for visitors. In contrast, malicious registrations are of the kind described, for example, in Google Inc. v. 1&1 Internet Limited, FA1708001742725 (Forum August 31, 2017), in which the respondent used the complainant's mark and logo on a resolving website containing offers for technical support and password recovery services, and soliciting Internet users' personal information: ". . . Complainant's exhibit 11 displays a malware message displayed on the webpage, which Complainant claims indicates fraudulent conduct." Malicious registrations are a step up in that they introduce a new, more disturbing, and even criminal element into the cyber marketplace. Respondents are not just looking for visitors; they are targeting brands for victims. Their bad faith is more than "profit[ing] commercially from others' trademarks"; they operate websites (or use e-mails) as trojan horses. This aligns registrations actionable under the UDRP with conduct policed and prosecuted by governments. The UDRP, then, is not just a "rights protection mechanism." The term "abusive registration" has enlarged in meaning (and, thus, in jurisdiction) to include malicious conduct generally.

Total security is a pipe dream. ICANN has working groups devoted to mapping the problem, and there are analytical studies assessing its extent in legacy and new TLDs. Some idea of the magnitude can be seen in the "Statistical Analysis of DNS Abuse in gTLDs Final Report" commissioned by an ICANN-mandated review team, the Competition, Consumer Trust and Consumer Choice Review Team (CCTRT). Incid[...]
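The typosquatting stratagems described above — combining marks with generic qualifiers, truncating marks, and removing or rearranging letters — can be illustrated with a short sketch. This is a hypothetical illustration only (the function name `typo_variants` and the qualifier list are assumptions, not part of any UDRP tooling):

```python
def typo_variants(mark: str) -> set[str]:
    """Generate illustrative typosquat candidates for a trademark string,
    using the patterns named in the article: truncation, single-letter
    removal, adjacent-letter transposition, and generic qualifiers."""
    variants = set()
    # Truncation: drop trailing characters.
    for i in range(3, len(mark)):
        variants.add(mark[:i])
    # Removal: drop one letter at a time.
    for i in range(len(mark)):
        variants.add(mark[:i] + mark[i + 1:])
    # Rearrangement: swap adjacent letters.
    for i in range(len(mark) - 1):
        s = list(mark)
        s[i], s[i + 1] = s[i + 1], s[i]
        variants.add("".join(s))
    # Combination with generic qualifiers (an assumed sample list).
    for q in ("shop", "online", "store"):
        variants.add(mark + q)
        variants.add(q + mark)
    variants.discard(mark)  # the mark itself is not a variant
    return variants

print(sorted(typo_variants("guess"))[:8])
```

Even this toy generator yields dozens of candidate strings for a five-letter mark, which hints at why policing such registrations is costly.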



Can Constellations of Internet Routing Satellites Compete With Long-Distance Terrestrial Cables?

2017-09-13T14:16:00-08:00

"The goal will be to have the majority of long distance traffic go over this network." —Elon Musk

Three companies, SpaceX, OneWeb, and Boeing, are working on constellations of low-Earth-orbit satellites to provide Internet connectivity. While all three may be thinking of competing with long terrestrial cables, SpaceX CEO Elon Musk said "the goal will be to have the majority of long-distance traffic go over this (satellite) network" at the opening of SpaceX's Seattle office in 2015 (video below). (Image: SpaceX orbital path schematic.) Can he pull that off?

Their first constellation will consist of 4,425 satellites operating in 83 orbital planes at altitudes ranging from 1,110 to 1,325 km. They plan to launch a prototype satellite before the end of this year and a second one during the early months of 2018. They will start launching operational satellites in 2019 and will complete the first constellation by 2024. The satellites will use radios to communicate with ground stations, but links between the satellites will be optical.

At an altitude of 1,110 kilometers, the distance to the horizon is 3,923 kilometers, which means each satellite will have a line-of-sight view of all other satellites within 7,846 kilometers, forming an immense mesh network. Terrestrial networks are not so richly interconnected, and cables must zig-zag around continents and islands when undersea, and around other obstructions when underground. Latency in a super-mesh of long, straight-line links should be much lower than with terrestrial cable. Additionally, Musk says the speed of light in a vacuum is 40-50 percent faster than in a cable, cutting latency further.

Let's look at an example. I traced the route from my home in Los Angeles to the University of Magallanes in Punta Arenas at the southern tip of Chile. As shown here, the terrestrial route was 14 hops and the theoretical satellite link only five hops. (The figure is drawn roughly to scale.)
So, we have 5 low-latency links versus 14 higher-latency links. The gap may close somewhat as cable technology improves, but it seems that Musk may be onto something. Check out the video of the speech Musk gave at the opening of SpaceX's Seattle office (https://www.youtube.com/embed/AHeZHyOnsm4). His comments about the long-distance connections discussed here come at the three-minute mark, but I'd advise you to watch the entire 26-minute speech.

Written by Larry Press, Professor of Information Systems at California State University

More under: Access Providers, Broadband, Telecom, Wireless [...]
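The horizon and latency arithmetic above can be checked with a short sketch. Assumed values (not from the article): a mean Earth radius of 6,371 km and light in fiber traveling at roughly 2/3 of c, which is where Musk's "40-50 percent faster" figure comes from:

```python
import math

R_EARTH_KM = 6371.0       # mean Earth radius (assumed)
C_KM_S = 299_792.458      # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3      # light in fiber travels at ~2/3 c (assumed)

def horizon_km(alt_km: float) -> float:
    """Line-of-sight distance from a satellite to the horizon,
    from the right triangle formed with the Earth's center."""
    return math.sqrt((R_EARTH_KM + alt_km) ** 2 - R_EARTH_KM ** 2)

# A satellite at 1,110 km sees roughly 3,920 km to the horizon
# (the article's 3,923 km uses a slightly different Earth radius),
# so two such satellites can see each other at up to twice that.
h = horizon_km(1110)
print(f"horizon: {h:,.0f} km, max sat-to-sat link: {2 * h:,.0f} km")

# One-way propagation delay over an illustrative 7,500 km path:
d = 7500
print(f"vacuum: {d / C_KM_S * 1000:.1f} ms")
print(f"fiber:  {d / (C_KM_S * FIBER_FACTOR) * 1000:.1f} ms")
```

Over that 7,500 km path the vacuum link comes in around 25 ms one-way versus roughly 37.5 ms in fiber, and real cable routes are longer than the great-circle distance, widening the gap further.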



Innovative Solutions for Farming Emerge at the Apps for Ag Hackathon

2017-09-13T09:16:00-08:00

Too often, people consider themselves passive consumers of the Internet. The apps and websites we visit are made by people with technical expertise, using languages we don't understand. It's hard to know how to plug in, even if you have a great idea to contribute. One solution to this problem is the hackathon.

Entering the Hackathon Arena

For the uninitiated, a hackathon is a place of hyper-productivity. A group of people converge for a set period of time, generally a weekend, to build solutions to specific problems. Often, the hackathon has an overall goal, as with the Sacramento Apps for Ag hackathon. "The Apps for Ag Hackathon was created to bring farmers, technologists, students and others from the agriculture and technology industries together in a vibrant, focused environment to create the seeds of new solutions for farmers using technology," says Gabriel Youtsey, Chief Innovation Officer, Agriculture and Natural Resources. Now in its fourth year, the hackathon was bigger than ever and was held at The Urban Hive in Sacramento, with the pitch presentations taking place during the California State Fair.

The event kicked off on Friday evening with perspectives from a farmer on the challenges for agriculture in California, including labor, water supply, food safety, and pests, and how technology can help solve them. Hackathon participants also had opportunities to get up and talk about their own ideas for apps or other technology-related concepts to solve food and agriculture problems for farmers. From there, teams formed freely based on people's skills and inclinations. Although the hackathon is competitive, a great deal of collaboration happens as people hash out ideas together. The hackathon itself provides tools and direction, and experts provide valuable advice and mentorship. At the end of the event, the teams presented working models of their apps and a slide deck describing the business plan.
Judges then decided who got to go home with the prizes, which often include support like office space, cash, and cloud credits so that developers can keep building their software.

For Entrepreneurs, Newbies, and Techies Alike

In late July of this year, three people with very different career backgrounds entered the Apps for Ag Hackathon to dedicate their weekend to building a piece of software. They all walked away with a top prize and a renewed commitment to reimagining how technology can contribute to agriculture and food production. In the room were Sreejumon Kundilepurayil, a hackathon veteran who has worked for tech giants building mobile and software solutions; Scott Kirkland, a UC Davis software developer and gardener; and Heather Lee, a self-described generalist in business and agritourism enthusiast. "I was terrified," Lee shared. "I'm tech capable — I've taken some coding classes — but I had no idea what my role would be. I decided to go and put myself in an uncomfortable position. When I got there, I realized that telling a story was my role." While her team members were mapping out the API and back-end development, Lee was working on the copy, graphics, video, and brand guide. Her idea for a mobile app that connects farmers and tourists for unique day-trips to farms ended up winning third place. First place went to Kundilepurayil and Vidya Kannoly for an app called Dr Green, which will help gardeners and farmers diagnose plant diseases using artificial intelligence and machine learning. Initially built for the Californian market, it will eventually be available globally as the machine gets more and more adept at i[...]



Amazon's Letter to ICANN Board: It's Time to Approve Our Applications for .AMAZON TLDs

2017-09-12T14:54:00-08:00

When ICANN launched the new gTLD program five years ago, Amazon eagerly joined the process, applying for .AMAZON and its Chinese and Japanese translations, among many others. Our mission was — and is — simple and singular: we want to innovate on behalf of our customers through the DNS. ICANN evaluated our applications according to the community-developed Applicant Guidebook in 2012; they achieved perfect scores. Importantly, ICANN's Geographic Names Panel determined that "AMAZON" is not a geographic name that is prohibited or one that requires governmental approval. We sincerely appreciate the care with which ICANN made these determinations, and we are hopeful that full approval of our applications is forthcoming. In a letter we sent to the ICANN Board on September 7, 2017 (the full text of which may be found below), we laid out the reasons why our applications should be swiftly approved now that an Independent Review Process (IRP) panel has found in our favor. Our letter highlights the proactive engagement we attempted with the governments of the Amazonia region over a five-year period to alleviate any concerns about using .AMAZON for our business purposes. First, we have worked to ensure that the governments of Brazil and Peru understand we will not use the TLDs in a confusing manner. We proposed to support a future gTLD to represent the region using the geographic terms of the region, including .AMAZONIA, .AMAZONICA or .AMAZONAS. We also offered to reserve for the relevant governments certain domain names that could cause confusion or touch on national sensitivities. During the course of numerous formal and informal engagements, we repeatedly expressed our interest in finding an agreed-upon outcome.
And while the governments have declined these offers, we stand by our binding commitment from our July 4, 2013 Public Interest Commitment (PIC) to the .AMAZON applications, which stated that we will limit registration of culturally sensitive terms — engaging in regular conversations with the relevant governments to identify these terms — and formalizing the fact that we will not object to any future applications of .AMAZONAS, .AMAZONIA and .AMAZONICA. We continue to believe it is possible to use .AMAZON for our business purposes while respecting the people, culture, history, and ecology of the Amazonia region. We appreciate the ICANN Board's careful deliberation of our applications and the IRP decision. But as our letter states, approval of our .AMAZON applications by the ICANN Board is the only decision that is consistent with the bottom-up, multistakeholder rules that govern ICANN and the new gTLD program. We urge the ICANN Board to now approve our applications. An ICANN accountable to the global multistakeholder community must do no less. The full text of our letter is below. * * * Dear Chairman Crocker and Members of the ICANN Board of Directors: We write as the ICANN Board considers the July 10, 2017 Final Declaration of the Independent Review Process Panel (IRP) in Amazon EU S.à.r.l. v. ICANN regarding the .AMAZON Applications. Because the Panel concluded that the Board acted in a manner inconsistent with its Bylaws, we ask the Board to immediately approve our long-pending .AMAZON Applications. Such action is necessary because there is no sovereign right under international or national law to the name "Amazon," because there are no well-founded and substantiated public policy reasons to block our Applications, because we are committed to using the TLDs in a respectful manner, and bec[...]



CE Router Certification Opens Up the Last Mile to IPv6 Fixed-Line

2017-09-12T08:08:00-08:00

Most end users probably have little sense of what IPv6 is. The prevailing industry debate is a standoff between network carriers and content and service providers. Carriers believe that, owing to the lack of IPv6 content and services, user demand for IPv6 is very small. Content and service providers counter that users cannot reach content and services over IPv6, so why should they offer IPv6 service under those conditions? Dr. Song Linjian of CFIEC argued in the article "China, towards fully-connected IPv6 networks" that this chicken-and-egg paradox between IPv6 networks and content is temporary; it exists, but it is not the key obstacle. China has already prepared itself, and once the last-mile problem is solved, the user base will grow rapidly. Telecom carriers long ago began strictly requiring that procured network devices support IPv6, for example through IPv6 Ready Logo testing and certification. However, the CE devices (home gateways, wireless routers, etc.) that users purchase themselves mostly do not support IPv6, which creates the last-mile problem. "When IPv6 is still burgeoning, it is hard to require vendors and users to have IPv6-enabled and IPv6-certified devices. Enterprises that produce mature, IPv6-capable CE Routers (Customer Edge Routers, i.e. home gateway routers) do not launch those products in the Chinese market because customers have no demand for IPv6. This has become the narrowest bottleneck hindering the growth of IPv6 fixed-line users," said Li Zhen, Director of the BII-SDNCTC, speaking about fixed-line IPv6 development. In the coming era of IoT, more and more devices will need to be connected, and home gateway CE routers, as the switching center of home network information and data, need full IPv6 support.
From another perspective, home gateways have begun to receive due attention for IPv6. On March 19, 2014, the IPv6 Forum and the IPv6 Ready Logo committee officially announced the IPv6 Ready CE Router Logo conformance and interoperability testing and certification program, marking full support from a brand-new CE Router certification program for next-generation IPv6 deployment and commercialization. According to statistics from the IPv6 Forum, about 3,000 network devices have so far passed IPv6 Ready certification, so the overall rate of IPv6 support is high. But among home gateway CE devices, under the CE Router testing program within the IPv6 Ready Logo framework, only 17 devices, from Netgear (US), ZTE, Broadcom, and others, have passed IPv6 Ready Logo certification. As the key to the last mile of IPv6 in households, the Chinese market for routing devices bears great potential, and CE Router certified devices will have a stronger competitive edge in next-generation network deployment and commercialization. According to the Global IPv6 Testing Center, the devices to be certified under the CE Router Logo are smart home gateways, such as home routers, wireless routers, and GPON/EPON end devices. The testing content covers the core protocols (Phase-2 enhanced certification), all DHCPv6 tests, and RFC 7084. Compared to the other certifications (Core, DHCPv6, IPsecv6, SNMPv6), this certification is highly targeted at CE devices and much stricter. In the future, more[...]
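As a minimal illustration of the last-mile gap described above, a host behind a CE router can be checked for a working IPv6 path by asking the operating system to select a route to a well-known IPv6 address. This is a sketch, not a conformance test; the probe address `2001:4860:4860::8888` (Google Public DNS) is an assumed example, and no packets are actually sent:

```python
import socket

def has_ipv6_route(probe_addr: str = "2001:4860:4860::8888") -> bool:
    """Return True if the OS can pick a source address and route for an
    IPv6 destination -- a quick hint that the local CE router and ISP
    path support IPv6. connect() on a UDP socket only selects a route;
    it does not transmit anything."""
    if not socket.has_ipv6:          # interpreter built without IPv6
        return False
    try:
        with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) as s:
            s.connect((probe_addr, 53))
            return True
    except OSError:
        return False                 # no IPv6 route available

print("IPv6 reachable:", has_ipv6_route())
```

On a network whose home gateway does not forward IPv6, the route selection fails and the sketch reports False, which is exactly the bottleneck the CE Router certification program targets.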