Subscribe: CircleID
http://www.circleid.com/rss/rss_all/
Language: English

CircleID



Latest posts on CircleID



Updated: 2016-12-09T12:56:00-08:00

 



A Three Minute Guide to Network Automation Bliss

2016-12-09T12:56:00-08:00

The cloud computing paradigm has been making steady progress in 2016. With the DevOps model making its way from cloud to networking, the business upside of fully automated service architectures is finally beginning to materialize. The associated service agility is expected to unleash new business models that transform the ways in which applications and connectivity can be consumed. To facilitate these new revenue opportunities, service providers are now focusing on techniques such as service abstraction and network service chaining. The idea is that once the networks have been abstracted into virtualized templates, it becomes easy to deploy and to connect networks in real time as required by different services.

While these methods are certainly foundational for network automation, there is at least one more challenge that has yet to dawn on most industry participants. A high percentage of business-to-business activities in service provisioning are associated with operating Wide Area Networks (WANs). Essentially, WANs are the private enterprise networks that function as the backbone of the digital enterprise, connecting data centers, machine networks and various company sites via trusted private networks. Safe consumption of online resources within the private network requires seamless connectivity between the next-generation services and the existing WANs. While network service chaining and service abstraction allow new networks to be deployed immediately, they do not provide a mechanism for connecting these resources with the existing enterprise networks.

A key problem here is that accurate network information is difficult to come by, as larger organizations tend to scatter network information across multiple silos. Typically, an internal network registry team would manage the networks used by the service provider itself, whereas the WAN-related information would be managed by several IP architects in various account teams. The practical implication of this siloed structure is that service abstraction and network service chaining are not enough when new services are being launched. As the network information is scattered across the organization, it can easily take weeks or even months before the correct information is found and the new services can be connected. This destroys most automation benefits such as scalability and time-to-market, making profitable on-demand enterprise services a distant dream.

To overcome this problem, here's a three-step guide for putting one's house in order:

1. Organize all existing network information into a single authoritative management system, and establish an automated mechanism for retrieving information from the software-defined world.
2. Develop a security mechanism that provides the appropriate Authentication, Authorization and Accounting (AAA) for the set of confidential network data.
3. Implement a unified Application Programming Interface (API) that allows networks and network-related data to be accessed by third-party systems and orchestrators.

As network automation gains momentum, an increasing percentage of organizations have made network abstraction and network service chaining a priority. Yet the lack of organization in basic network management will keep on-demand enterprise services in a bind. The power of network automation can be unleashed by deploying an authoritative network management solution.
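To make the third step concrete, here is a minimal sketch of what a read-only query against an authoritative network registry's unified API might look like from an orchestrator's point of view. The endpoint, field names and token scheme are hypothetical illustrations, not the API of any product mentioned in the article.

```python
import requests

# Hypothetical endpoint of an authoritative network registry / IPAM system.
REGISTRY_API = "https://ipam.example.net/api/v1"
API_TOKEN = "REPLACE_WITH_REAL_TOKEN"  # AAA: token-based authentication (assumed scheme)


def find_free_subnet(parent_block: str, prefix_len: int) -> dict:
    """Ask the registry for the next free subnet inside a parent block.

    This models the kind of call an orchestrator would make before
    chaining a newly instantiated virtual network into an existing WAN.
    """
    response = requests.get(
        f"{REGISTRY_API}/subnets/next-free",
        params={"parent": parent_block, "prefix_length": prefix_len},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"cidr": "10.20.30.0/24", "site": "DC-East"}


if __name__ == "__main__":
    subnet = find_free_subnet("10.20.0.0/16", 24)
    print(f"Orchestrator can deploy the new service into {subnet['cidr']}")
```

The design point is simply that the registry, not a spreadsheet in an account team's silo, is the single source of truth the orchestrator consults.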
Written by Juha Holkkola, CEO of FusionLayer, Inc.

Follow CircleID on Twitter

More under: Access Providers, Broadband, Cloud Computing, Data Center, Internet of Things, IP Addressing, IPv6, Telecom [...]



Internet Governance Forum Puts the Spotlight on Trade Agreements

2016-12-09T12:22:00-08:00

width="644" height="362" src="https://www.youtube.com/embed/bol8ozrKk_c?rel=0&showinfo=0" frameborder="0" allowfullscreen style="margin-bottom:7px;">IGF 2016 / PLENARY – Trade Agreements and the Internet"This year was the first year in which the spotlight fell on the use of trade agreements to make rules for the Internet behind closed doors, and a broad consensus emerged that this needs to change," Jeremy Malcolm reporting today from EFF. "The Internet Governance Forum (IGF) is a multi-stakeholder community that discusses a broad range of Internet issues, and seeks to identify possible shared solutions to current challenges. ... In an unprecedented focus on this issue, there were three separate workshops held on the topic — an EFF-organized workshop on the disconnect between trade agreements and the Internet's multi-stakeholder governance model, two more specific workshops on the Trans-Pacific Partnership (TPP) and on the Trade in Services Agreement (TISA), and finally a high-profile plenary session that was translated into the six United Nations languages and included on its panel two former trade negotiators, a Member of the European Parliament, and two private sector representatives, as well as speakers from EFF and Public Citizen." — Internet Infrastructure Coalition's David Snead: "I think if you look at the recent history of trade negotiations, we have this long string of failed trade agreements, and trade agreements that have been really vehemently opposed by a number of people, the last of which is TPP. What does that indicate to me? It indicates to me that as someone who believes very deeply in the potential for free trade and the fact that free trade is good, that the system isn't working. If we can't get people behind the trade agreements, if we have people in the streets opposing the trade agreements, we need to find a better way to address their concerns, and for me the primary issue is one of secrecy. I think we've gone way overboard in classifying trade agreements and trade agreement texts, and there need to be methods for opening those up." — Malcolm: "The attention now being given to trade at this important global forum comes not a moment too soon, as the intense push to ram Internet issues into international law through the TPP and TISA that we saw this year won't be dampened for long by the failure of the TPP." Follow CircleID on TwitterMore under: Internet Governance, Policy & Regulation [...]



AT&T CEO Confident Trump-Appointed FCC Will Scarp Net Neutrality Regulations

2016-12-08T12:55:00-08:00

AT&T's regulatory problems are melting away as the inauguration of President-elect Donald Trump draws near. Aaron Pressman reporting in Fortune: "AT&T CEO Randall Stephenson says he expects a Trump-appointed FCC won't push the net neutrality issue. And a Trump-led Department of Justice antitrust review of the Time Warner deal should lead to approval, he said. Pointing to the three people that Trump has appointed to his transition team overseeing the FCC, Stephenson said the agency should be much more amenable to industry desires."

Follow CircleID on Twitter

More under: Access Providers, Net Neutrality, Policy & Regulation




All About the Copyright Office's New DMCA System

2016-12-08T07:46:00-08:00

Website publishers that want to protect themselves against claims of copyright infringement must participate in a new online registration system created by the U.S. Copyright Office for the Digital Millennium Copyright Act ("DMCA") — even if they have participated previously.

The new program, launched on December 1, 2016, offers a mandatory online registration system for the DMCA that replaces the original (and clunky) "interim" designation system, which was created in 1998. The purpose of the new, online DMCA agent-appointment process is the same as the previous system: to provide a safe harbor (under section 512 of the DMCA) from copyright infringement for so-called "service providers," which the U.S. Copyright Office notes includes "those that allow users to post or store material on their systems, and search engines, directories, and other information location tools."

Under the DMCA, a service provider that appoints an agent to receive "take-down" notices may avoid liability for infringing content if it responds appropriately — typically by "expeditiously" removing access to the infringing content. To avail itself of the DMCA protection, a website publisher must designate, at the U.S. Copyright Office, an agent to receive notices of claimed infringement.

In the past, this designation occurred by completing a hard-copy document, physically signing it and mailing it with a $135 check to the Register of Copyrights. Then, a scanned copy of the document was published on the Copyright Office's website. While this system worked, it was awkward and expensive because it relied on old-fashioned technology. Among its other drawbacks, the content of DMCA agent forms was not fully searchable (which made it difficult for copyright owners to use) and not easily maintained (which made it difficult for service providers to keep current). Indeed, one study performed by the Copyright Office showed that 70% of paper designations either had inaccurate information or were for defunct service providers. As the Final Rule for the new process stated:

"These findings are particularly concerning because they show that service providers might unwittingly be losing the protection of the safe harbors in section 512 [of the DMCA] by forgetting to maintain complete, accurate, and up-to-date information with the Copyright Office. These findings are also concerning because the directory in many cases would seem to be an unreliable resource, at best, to identify or obtain contact information for a particular service provider's designated agent."

Clearly, as I've written before, the old system was outdated and in need of modernization. The new system replaces the paper process with an online tool, a $6 fee and a searchable database of agents. The Copyright Office has provided video tutorials and a list of frequently asked questions (FAQ) about the new DMCA agent-designation system.

Two Things to Know

Aside from the mechanics of the new system, perhaps the two most important things to know are:

1. Website publishers must appoint an agent under the new system to keep (or obtain) their DMCA protection. As the Copyright Office says: "Any service provider that has previously designated an agent with the Office will have until December 31, 2017 to submit a new designation electronically through the new online registration system. Until that time, an accurate designation in the old paper-generated directory will continue to satisfy the service provider's obligations under the DMCA. After that time, the designation will expire and become invalid, which could result in the service provider losing the section 512 safe harbor protections."

2. Website publishers must renew their appointments every three years to maintain their protection under the DMCA. According to the Copyright Office: "A service provider's designation will expire and become invalid three years after it is registered with the Office, unless the service provider renews such designation by[...]



Sledgehammer DDoS Gamification and Future Bugbounty Integration

2016-12-07T14:33:00-08:00

Monetization of DDoS attacks was core to online crime well before the term cybercrime was ever coined. For the first half of the Internet's life, DDoS was primarily a mechanism to extort money from targeted organizations. As with just about every Internet threat, it has evolved and broadened in scope and objectives over time.

The new report by Forcepoint Security Labs covering their investigation of the Sledgehammer gamification of DDoS attacks is a beautiful example of that evolution. Their analysis paper walks through both the malware agents and the scoreboard/leaderboard mechanics of a Turkish DDoS collaboration program (named Sath-ı Müdafaa, or "Surface Defense") behind a group that has targeted organizations with political ties deemed inconsistent with Turkey's current government.

In this most recent example of DDoS threat evolution, a pool of hackers is encouraged to join a collective targeting the websites of perceived enemies of Turkey's political establishment. Using the DDoS agent "Balyoz" (the Turkish word for "sledgehammer"), members of the collective are tasked with attacking a predefined list of target sites — but can suggest new sites if they so wish. In parallel, a scoreboard tracks participants' use of the Balyoz attack tool, allocating points for every ten minutes of attack conducted that can be redeemed against a stand-alone version of the DDoS tool and other revenue-generating cybercrime tools.

As is traditional in the dog-eat-dog world of cybercrime, there are several omissions that the organizers behind the gamification of the attacks failed to pass on to the participants — such as the backdoor built into the malware they're using.

Back in 2010, I wrote the detailed paper "Understanding the Modern DDoS Threat" and defined three categories of attackers — Professional, Gamerz, and Opt-in. This new DDoS threat appears to meld the Professional and Opt-in categories into a single political and money-making venture. Not a surprising evolutionary step, but certainly an unwanted one.

If it's taken six years of DDoS cybercrime evolution to get to this hybrid gamification, what else can we expect? In that same period of time we've seen ad hoc website hacking move from an ignored threat to forcing a public disclosure discourse, to acknowledgment of discovery and remediation, and on to commercial bug bounty platforms. The bug bounty platforms (such as Bugcrowd, HackerOne, Vulbox, etc.) have successfully gamified the low-end business of website vulnerability discovery — where bug hunters and security researchers around the world compete for premium rewards. Is it not a logical step that DDoS also makes the transition to the commercial world?

Several legitimate organizations provide "DDoS Resilience Testing" services. Typically, through the use of software bots they spin up within public cloud infrastructure, DDoS-like attacks are launched at paying customers. The objectives of such an attack include measuring and verifying the defensive capabilities of the target's infrastructure against DDoS attacks, exercising and testing the company's "blue team" response, and wargaming business continuity plans. If we were to apply the principles of bug bounty programs to gamifying the commercial delivery of DDoS attacks, rather than a contrived, limited-scope public cloud imitation, we'd likely have a much more realistic testing capability — benefiting all participants.

I wonder who'll be the first organization to master scoreboard construction and incentivisation? I think the new bug bounty companies are agile enough and likely have the collective community following needed to reap the financial rewards of the next DDoS evolutionary step.

Written by Gunter Ollmann, Chief Security Officer at Vectra

Follow CircleID on Twitter

More under: Cyberattack, Cybercrime, DDoS [...]



Where is the Standard 'Socket' for Broadband?

2016-12-07T12:45:00-08:00

When you plug into a broadband socket, what you are accessing is a distributed computing service that supplies information exchange. What is the service description and interface definition? For inspiration, we can look at the UK power plug.

One of the great unsung fit-for-purpose innovations in British society is the BS1363 13 ampere power plug and socket. This is superior to other plugs by virtue of its solid construction and safe design. Firstly, the three square prongs make for excellent electrical contact. It is practically impossible to wobble the plug to cause sparks or intermittent connectivity. The 'success mode' of clean, continuous power is fully covered off.

But that's not all. When the earth prong goes into the socket, it opens up shutters that reveal the live power. Small children can't put sticky fingers in the socket, to the occasional regret of a frustrated parent of a screaming toddler. Yanking on the cord also does not easily apply undue force to the electrical components, causing a dangerous fracture. Another great thing about a British plug is the fuse. If there is too much demand, then it cuts out, rather than going on fire. So as a design, the 'failure modes' are also well covered off.

When you stand in the store to buy an electrical appliance, it is easy to tell what the rated demand is in terms of volts and amps. The capability of the supply is also clear, both for the whole dwelling and for in-building distribution like multi-socket power strips. You know your cooker needs a special supply, and that you can't power your tumble dryer off an AA battery soldered onto a socket.

In summary, a fit-for-purpose interface between supply and demand does three things: it enables 'success' for specific uses; it sufficiently limits 'failure' for those uses; and it clearly communicates to the buyer what uses it is suitable for.

What is missing in broadband is the conceptual equivalent of the standardised plug and socket. The interface between demand and supply is defined at an electrical level, but the overall service of information exchange is (mostly) undefined. As a result we are left with two less than satisfactory approaches to service delivery.

One technical approach is how we use 'over the top' applications like iPlayer today. It is as if we leave an unshielded live information 'virtual cable' exposed directly to end users. 'Success modes' are enabled, since many applications work some of the time, but the constraint on their 'failure modes' is weak. In this model, users are not sufficiently 'insulated' from one another. Performance 'brown-outs' from overload are common, as our example with video sign language demonstrates. As your children come home from school and go online, the performance of your important work application tangibly plummets.

Alternatively, we have vertically integrated network services, more like how traditional landline phone calls or cable TV work. The information 'virtual cable' from the appliance is 'hard-wired' into the wall, and it can't be switched over. Whilst performance is predictable, and the service is usually fit-for-purpose, it is a highly inflexible approach. The price of constraining the 'failure mode' is a severe limit on the number of 'success modes'. Vertical integration reduces consumer choice, with a high cost for any services delivered. The need to 'insulate' the application from other uses may even result in a complete parallel infrastructure, as we have created in the UK for smart meters, at a cost of billions of pounds.

The resources spent on special-purpose smart meter connectivity could have delivered an enormous improvement in the general-purpose infrastructure useful for transport, healthcare and emergency services. We certainly can't afford to build duplicate infrastructures for every industry and application whose needs diverge even slightly from basic Internet access. To break free [...]



Internet Society Urges Increased Effort to Address Unprecedented Challenges Facing the Internet

2016-12-07T10:50:00-08:00

During the 11th Internet Governance Forum (IGF), a United Nations-convened conference taking place in Mexico, 6-9 December, the Internet Society urged the global Internet community to redouble its efforts in addressing the wave of unprecedented challenges facing the Internet. From a press release issued today in Guadalajara, Mexico: "With just under half of the global population expected to be online by the end of 2016, Internet growth rates are slowing, resulting in a deepening digital divide between those with access and those without. Deploying infrastructure, increasing usability and ensuring affordability are critical for expanding Internet access and globally eliminating divisions in society, as are the policy frameworks to enable this. In addition, issues such as blocking of content, privacy, mass surveillance, cybercrime, hacking, and fake news are all contributing to what is now a growing global erosion of trust amongst users."

Follow CircleID on Twitter

More under: Access Providers, Broadband, Censorship, Cyberattack, Cybercrime, Internet Governance, Internet of Things, Policy & Regulation, Security




From ICANN57 Hyderabad to the 3rd WIC Wuzhen Summit: A Moment of Consensus on Internet Governance

2016-12-06T20:12:00-08:00

Two events that happened last month deserve an additional note. One is the ICANN57 conference held in Hyderabad on November 3-9. The other is the 3rd World Internet Conference (WIC) Wuzhen Summit held in Zhejiang Province on November 16-18. Though completely overshadowed by the result of the presidential election in the United States, both events mark a victory for non-state actors and serve as good news for the community.

ICANN57, the first post-transition meeting, celebrated the exit from U.S. government oversight. The removal of this unique role not only encouraged conference participation but also significantly promoted global understanding of ICANN. While each country has its own version of misunderstanding about ICANN, the major Chinese misunderstanding had centered on the word privatization. ICANN bylaws state clearly that the non-profit public-benefit corporation is rooted in the private sector, and that the private sector refers to "business stakeholders, civil society, the technical community, academia, and end users". Very few Chinese readers, however, read the bylaws. The Chinese public relied heavily on the media to learn about ICANN and its model of governance. Against this context, the word privatization caused most of the disputes: it was misunderstood as commercialization. A hearing of the U.S. House of Representatives on March 17, for example, was entitled "Privatizing the IANA". It is not surprising, therefore, that quite a few academic figures in China said that the transition of stewardship was not the internationalization of ICANN, but commercialization. One Fudan University professor went to such an extreme as to say that the transition was a setback.

But that is no longer the case after ICANN57. The period of confusion was short-lived and was replaced by very positive remarks from various stakeholders. Most now know that privatization does not mean commercialization or industrialization, but bringing ICANN under non-state actors while taking into account the advice of governments.

In the same way that the privatization of IANA is not real privatization, the multilateralism of the WIC Wuzhen Summit is not real multilateralism, and it has never been. A major misunderstanding about the WIC Wuzhen Summit has been that it promotes solely multilateralism and cyber sovereignty. That is only half true. The Wuzhen Summit has been an evolutionary process. The 2014 summit expressed China's dissatisfaction with the Snowden leaks. The 2015 summit used a convenient and conventional tool, cyber sovereignty, for self-defense. The 2016 summit, however, was more committed to building consensus and appealing to the global commons, which is closer to ICANN's values of being consensus-driven and "One World, One Internet". It is reasonable to argue that the 3rd Wuzhen Summit made a multistakeholder turn. It is not real multistakeholderism, of course, but it is an awareness and a conception.

From the very beginning, the Wuzhen Summit has had robust multistakeholder participation. The 3rd summit was no exception. Yet the 3rd summit moved further and made some constructive linguistic compromises, using the words multi-players, multi-parties, or multi-actors to show support for the multistakeholder model. This has not been a smooth process. The confusion about China's position on global Internet governance in general, and IANA transition matters in particular, started with Chinese President Xi Jinping's speech in Brazil on July 17, 2014, in which he spoke of three principles China upheld: multilateral, democratic, and transparent.

These three words were apparently borrowed from Paragraph 29 of the Tunis Agenda for the Information Society in WSIS 2005. They were inserted into President Xi's speech by a lower-level bureaucrat either from China's Foreign Ministry or from the Ministry of Industry and Information Technology. That frustrated the support of the multista[...]



Overseas TLD Registries Licensed by Chinese Government

2016-12-06T08:37:00-08:00

It was reported that .XYZ, .CLUB and .VIP have obtained official licenses from the Chinese government. The approval notices can be found on the website of the Ministry of Industry and Information Technology ("MIIT"), the domain name regulator in China.

This is the first batch of overseas top-level domains (TLDs) to be officially approved. Previously, only two legacy TLDs — .COM and .NET — had been issued such approval. The "green light" means that Chinese registrars are able to sell these domains legally in China and the websites using these domains are allowed to complete Bei'an (host the websites in China and get ICP numbers). All registry operators of these three TLDs adopt the registry backend platform offered by Internet Domain Name System Beijing Engineering Research Center Ltd. ("ZDNS"). The approval indicates that ZDNS's solutions meet the MIIT's requirements for a technical system and the ability to perform real-name verification.

It appears to be a strong signal that China welcomes an open and competitive domain market, given the demonstration of compliance with rules by registry operators. Inspired by the news, observers feel optimistic that more foreign TLDs will be able to come to the Chinese market, which will, in turn, help build a more dynamic and healthy domain name ecosystem. Firms, end users, and domainers are extremely happy with the availability of more domain extensions, as they have more choices in selecting preferred domain names.

Written by Jian Chuan Zhang, Senior Researcher at KNET and ZDNS

Follow CircleID on Twitter

More under: Domain Names, Policy & Regulation, Top-Level Domains




NTP: The Most Neglected Core Internet Protocol

2016-12-05T09:58:00-08:00

The Internet of today is awash with networking protocols, but at its core lie a handful that fundamentally keep the Internet functioning. From my perspective, there is no modern Internet without DNS, HTTP, SSL, BGP, SMTP, and NTP. Of these most important Internet protocols, NTP (Network Time Protocol) is likely the least understood and receives the least attention and support. Until very recently, it was supported (part-time) by just one person — Harlan Stenn — "who had lost the root passwords to the machine where the source code was maintained (so that machine hadn't received security updates in many years), and that machine ran a proprietary source-control system that almost no one had access to, so it was very hard to contribute to".

Just about all secure communication protocols and server synchronization processes require that the systems involved have their internal clocks set the same. NTP is the protocol that allows all this to happen. ICEI and CACR have gotten involved with supporting NTP, and there are several related protocol advancements underway to increase the security of such a vital component of the Internet. NTS (Network Time Security), currently in draft form with the Internet Engineering Task Force (IETF), aims to give administrators a way to add security to NTP and promote secure time synchronization.

While there have been remarkably few exploitable vulnerabilities in NTP over the years, the recent growth of DDoS botnets (such as Mirai) utilizing NTP reflection attacks shone a new light on its frailties and importance. Some relevant stories on how frail and vital NTP has become, and what's being done to correct the problem, can be found at:

• Time is Running Out for NTP
• NTP: the rebirth of ailing, failing core network infrastructure
• The internet's core infrastructure is dangerously unsupported and could crumble (but we can save it!)

Written by Gunter Ollmann, Chief Security Officer at Vectra

Follow CircleID on Twitter

More under: DDoS, Security [...]
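For readers who have never looked at NTP on the wire, here is a minimal, illustrative SNTP-style client sketch. It assumes nothing beyond the Python standard library; the server name is just an example, and the complete absence of authentication in this exchange is precisely the gap that NTS is meant to close.

```python
import socket
import struct
import time

# Minimal SNTP-style query: send a 48-byte request to an NTP server (UDP port 123)
# and read the server's transmit timestamp from the reply.
NTP_SERVER = "pool.ntp.org"           # example server; any reachable NTP server works
NTP_EPOCH_OFFSET = 2208988800         # seconds between the NTP epoch (1900) and Unix epoch (1970)


def ntp_time(server: str = NTP_SERVER) -> float:
    # First byte 0x1B encodes LI=0, Version=3, Mode=3 (client); the rest of the packet is zero.
    request = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(request, (server, 123))
        reply, _ = sock.recvfrom(48)
    # The transmit timestamp (whole seconds since 1900) sits at bytes 40-43 of the reply.
    transmit_seconds = struct.unpack("!I", reply[40:44])[0]
    return transmit_seconds - NTP_EPOCH_OFFSET


if __name__ == "__main__":
    server_time = ntp_time()
    print("NTP server time :", time.ctime(server_time))
    print("Local clock skew:", server_time - time.time(), "seconds")
```

Nothing in that exchange proves who answered or that the timestamp is honest, which is why unauthenticated NTP is both easy to spoof and attractive as a reflection/amplification vector.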



Deadline of Dec 11 for Nominations for Public Interest Registry (.ORG Operator) Board of Directors

2016-12-05T07:03:00-08:00

Would you be interested in helping guide the future of the Public Interest Registry (PIR), the non-profit operator of the .ORG, .NGO and .ONG domains? If so, the Internet Society is seeking nominations for three positions on the PIR Board of Directors. The nominations deadline is Sunday, December 11, 2016.

More information about the positions and the required qualifications can be found at: http://www.internetsociety.org/call-nominations-pir-board-directors

As noted on that page:

The Internet Society is now accepting nominations for the Board of Directors of the Public Interest Registry (PIR). PIR's business is to manage the international registry of .org, .ngo, and .ong domain names, as well as associated Internationalized Domain Names (IDNs), and the new OnGood business.

In 2017 there are three positions opening on the PIR Board. Directors will serve a 3-year term that begins in April 2017 and expires in April 2020.

If you are interested in being considered as a candidate, please see the form to submit toward the bottom of the info page.

P.S. In full disclosure, the Internet Society is my employer but I have no direct connection to PIR and am passing this along purely because I think members of the CircleID community of readers might be excellent candidates for these positions.

Written by Dan York, Author and Speaker on Internet technologies - and on staff of Internet Society

Follow CircleID on Twitter

More under: Domain Names, Registry Services, Top-Level Domains




Google Fiber in Havana - Wishful Thinking?

2016-12-02T14:50:00-08:00

This post is conjecture, but it is informed conjecture. Consider the following:

• When Google Fiber started in Kansas City, most people assumed that it was a demonstration project, intended to spur investment by the incumbent US Internet service providers (ISPs). Few thought that Google wanted to become a retail ISP.
• Google Fiber garnered a lot of publicity, and Google began speaking of it as a real, profit-making business. They announced other cities and started laying fiber in some of them.
• Last June, Google bought Webpass, a small ISP that deploys fiber and was experimenting with unproven, but perhaps revolutionary, pCell wireless technology from Artemis Networks. I speculated that they might be thinking of shifting Google Fiber to a hybrid fiber-wireless model based on that acquisition and other experiments they were conducting.
• Last October, Google Fiber announced that their work would continue in cities where they had launched or were under construction, but that they would "pause operations and offices" in cities in which they had been conducting exploratory discussions, and they took many, but not all, workers off the Google Fiber project.
• Google's Project Link has installed wholesale fiber backbones in two African capitals, and I have suggested and speculated that they might do the same in Havana (with the caveat that they do it in conjunction with ETECSA, since there are no competing retail ISPs in Cuba as there are in Africa).
• Last July, ETECSA announced that they would be running a fiber trial in parts of Old Havana. They did not specify whether it was fiber to the premises or to the neighborhood.
• A month ago, a friend told me that a friend of his who works at ETECSA said the fiber trial would begin December 5.
• Last week, Trump threatened to "terminate the deal" (whatever that means to him) if Cuba would not make it better.
• Yesterday, nearly identical stories suggesting that the White House was pushing Cuba on deals with Google and General Electric were published in the Wall Street Journal and El Nuevo Herald.

That is all for real — now for the conjecture ...

Maybe the trial in Old Havana will be a joint project between Google and ETECSA. Google has considerable fiber installation experience with Project Link in Africa and Google Fiber in the US. A joint project with ETECSA would be relatively simple because they would not have to deal with competing ISPs as in Africa, or lawsuits and other obstacles from incumbent ISPs as in the United States. It could either be a pilot experiment — a trial — or the first step in leapfrogging Havana's connectivity infrastructure.

One can imagine Google installing a fiber backbone in Havana like they have done in Accra and Kampala and leaving it up to ETECSA to connect premises using a mix of fiber, coaxial cable and wireless technology. If that were to happen, Havana could "leapfrog" from one of the worst-connected capital cities in the world to a model of next-generation technology. If things went well in Havana, which city would be next?

The partnership between Google and ETECSA could take many forms. Google might supply expertise and capital, and ETECSA could supply labor and deal with the Cuban and Havana bureaucracies. In return, Google would get terrific publicity, a seat at the table when other Cuban infrastructure like data centers or video production facilities is discussed, and more users to click on their ads. (Take that, Facebook.) Havana could also serve as a model and reference-sell for cooperation between Google and other cities. (Take that, Comcast and AT&T.) There might even be some revenue sharing, with ETECSA paying Google as the ISPs do in Africa. This would also be[...]



Over $31 Million Stolen by Hackers from Russian Central Bank

2016-12-02T14:43:00-08:00

Hackers have stolen over 2 billion rubles ($31 million) from correspondent accounts at the Russian central bank, the bank reported today — the latest example of an escalation of cyber attacks on financial institutions around the globe. Reuters reports: "Central bank official Artyom Sychyov discussed the losses at a briefing, saying that the hackers had attempted to steal about 5 billion rubles. Sychyov was commenting on a central bank report released earlier in the day that described hackers breaking into accounts there by faking a client's credentials. The bank provided few other details in its lengthy report."

Update, Dec 9: "Russian authorities arrested a large number of suspects in May in connection with the recently revealed electronic theft of $19 million from accounts held at the Russian central bank," Alexander Winning and Elena Fabrichnaya reporting from Moscow in Reuters. "Artyom Sychyov, deputy head of the Bank of Russia's security directorate, said the Federal Security Service, or FSB, and the Interior Ministry, which oversees the police, had run a joint operation after the Russian heist, and that 'a large number of people were arrested'."

Follow CircleID on Twitter

More under: Cybercrime




Cyberattack Cuts Off Thousands of TalkTalk, Post Office Customers in UK

2016-12-01T15:15:00-08:00

Thousands of TalkTalk and Post Office customers in the UK have had their Internet access cut by an attack targeting certain types of Internet routers, according to a BBC report on Thursday. "A spokeswoman for the Post Office told the BBC that the problem began on Sunday and had affected about 100,000 of its customers. TalkTalk also confirmed that some of its customers had been affected, and it was working on a fix. It is not yet known who is responsible for the attack. It involves the use of a modified form of the Mirai worm." Last week Germany's Deutsche Telekom reported that close to a million of its customers had lost their internet connection as a result of the attack. Mirai was also involved in the historic October attack disrupting the world's leading websites.

Follow CircleID on Twitter

More under: Cyberattack, DDoS




Gambia Criticized for Shutting Down Communication Networks on Election Day

2016-12-01T14:20:00-08:00

(image) Gambia election day – Internet and international calls banned

"Communication blackout shatters illusion of freedom during the election," says Amnesty International in a statement on Thursday. Amid blocks on the Internet and other communications networks in Gambia during today's presidential election, Samira Daoud, Amnesty International's Deputy Regional Director for West and Central Africa, said: "This is an unjustified and crude attack on the right to freedom of expression in Gambia, with mobile internet services and text messaging cut off on polling day. Shutting down these communication networks shatters the illusion of freedom that had emerged during the two-week period of the electoral campaign, when restrictions appeared to have been eased. ... Blocks on the internet and other communications networks amount to a flagrant violation of the right to freedom of expression and access to information. The same rights that people have offline must also be protected online."

— The election features three candidates, President Yahya Jammeh (APRC, Alliance for Patriotic Reorientation and Construction), Adama Barrow (Coalition 2016, a coalition of opposition parties) and Mama Kandeh (GDC, Gambian Democratic Congress), in an election that will be won by whoever gains the most votes on 1 December. There is no second round and results are expected on 2 December.

— Govt of Gambia orders Internet blackout ahead of national election. Service down since 20:05 UTC on 30-Nov. Dyn Research / Dec 1


Follow CircleID on Twitter

More under: Censorship




'Avalanche' Network Dismantled in an International Cyber Operation Including Europol and the FBI

2016-12-01T11:31:00-08:00

Global distribution of Avalanche servers. Source: Shadowserver.org

After over four years of investigation, the international criminal infrastructure platform known as 'Avalanche' is reported to have been dismantled via a collaborative effort involving the Public Prosecutor's Office Verden and the Lüneburg Police (Germany) in close cooperation with the United States Attorney's Office for the Western District of Pennsylvania, the Department of Justice and the FBI, Europol, Eurojust and global partners. The takedown also required help from INTERPOL, the Shadowserver Foundation, Registrar of Last Resort, ICANN and domain name registries. Additional information below from the official report:

— 5 individuals were arrested, 37 premises were searched, and 39 servers were seized. Victims of malware infections were identified in over 180 countries. Also, 221 servers were put offline through abuse notifications sent to the hosting providers. The operation marks the largest-ever use of sinkholing to combat botnet infrastructures and is unprecedented in its scale, with over 800,000 domains seized, sinkholed or blocked.

— The Avalanche network was used as a delivery platform to launch and manage mass global malware attacks and money mule recruiting campaigns. It has caused an estimated EUR 6 million in damages in concentrated cyberattacks on online banking systems in Germany alone.

— Monetary losses associated with malware attacks conducted over the Avalanche network are estimated to be in the hundreds of millions of euros worldwide, although exact calculations are difficult due to the high number of malware families managed through the platform.

— What made the 'Avalanche' infrastructure special was the use of the so-called double fast flux technique. The complex setup of the Avalanche network was popular amongst cybercriminals because the double fast flux technique offered enhanced resilience to takedowns and law enforcement action.

— Malware campaigns that were distributed through this network include around 20 different malware families such as goznym, marcher, matsnu, urlzone, xswkit, and pandabanker. The money mule schemes operating over Avalanche involved highly organised networks of "mules" that purchased goods with stolen funds, enabling cyber-criminals to launder the money they acquired through the malware attacks or other illegal means.

— Infographic / Operation Avalanche: Click here to see an infographic illustrating the Avalanche operation. The detailed technical infographic is also provided here.

Additional reports:
— Shadowserver: Avalanche Law Enforcement Take Down
— Krebs on Security: 'Avalanche' Global Fraud Ring Dismantled

Follow CircleID on Twitter

More under: Cybercrime, Malware [...]
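For readers curious what "double fast flux" looks like from the outside, the sketch below shows one way a defender might observe it: repeatedly resolve a suspect domain and watch how quickly its A records and NS records churn, and how low the TTLs are. This is an illustrative heuristic only; it assumes the third-party dnspython library (version 2.x) is installed, and the domain name is a placeholder.

```python
import time
from collections import Counter

import dns.resolver  # dnspython (third-party, assumed installed)


def observe_fast_flux(domain: str, rounds: int = 10, pause: int = 60) -> None:
    """Repeatedly resolve a domain and report signals associated with
    (double) fast flux hosting: many distinct A records, very short TTLs,
    and churning NS records."""
    seen_ips = Counter()
    seen_ns = Counter()
    min_ttl = None

    for _ in range(rounds):
        a_answer = dns.resolver.resolve(domain, "A")
        ttl = a_answer.rrset.ttl
        min_ttl = ttl if min_ttl is None else min(min_ttl, ttl)
        for record in a_answer:
            seen_ips[record.address] += 1

        ns_answer = dns.resolver.resolve(domain, "NS")
        for record in ns_answer:
            seen_ns[str(record.target)] += 1

        time.sleep(pause)

    print(f"{domain}: {len(seen_ips)} distinct A records, "
          f"{len(seen_ns)} distinct NS records, lowest TTL {min_ttl}s")
    # Heuristic only: large, fast-changing IP and NS sets with very low TTLs
    # are characteristic of double fast flux, but are not proof by themselves.


if __name__ == "__main__":
    observe_fast_flux("example.com", rounds=3, pause=10)
```

In double fast flux both layers rotate, which is why the takedown required sinkholing hundreds of thousands of domains rather than seizing a fixed set of servers.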



The Purple Team Pentest

2016-11-30T10:39:00-08:00

It's not particularly clear whether a marketing intern thought he was being clever or a fatigued pentester thought she was being cynical when the term "Purple Team Pentest" was first thrown around like spaghetti at the fridge door, but it appears we're now stuck with the term for better or worse.

The definition of penetration testing has broadened to the point that we commonly label a full-scope penetration of a target's systems, with the prospect of lateral compromise and social engineering, as a Red Team Pentest — delivered by a "Red Team" entity operating from a sophisticated hacker's playbook. Likewise, we now often acknowledge the client's vigilant security operations and incident response team as the "Blue Team" — charged with detecting and defending against security threats or intrusions on a 24x7 response cycle.

Requests for penetration tests (black-box, gray-box, white-box, etc.) are typically initiated and procured by a core information security team within an organization. This core security team tends to operate at a strategic level within the business — advising business leaders and stakeholders of new threats, reviewing security policies and practices, coordinating critical security responses, evaluating new technologies, and generally being the go-to guys for out-of-the-ordinary security issues. When it comes to penetration testing, the odds are high that some members are proficient with common hacking techniques and understand the technical impact of threats upon the core business systems. These are the folks that typically scope and eventually review the reports from a penetration test — they are, however, NOT the "Blue Team", but they may help guide and at times provide third-line support to security operations people.

No, the nucleus of a Blue Team is the front-line personnel watching over SIEMs, reviewing logs, initiating and responding to support tickets, and generally swatting down each detected threat as it appears during their shift. Blue Teams are defensively focused and typically proficient at their operational security tasks. The highly focused nature of their role does, however, often mean that they lack what can best be described as a "hacker's-eye view" of the environment they're tasked with defending.

Traditional penetration testing approaches are often adversarial. The Red Team must find flaws, compromise systems, and generally highlight the failures in the target's security posture. The Blue Team faces the losing proposition of having to have already secured and remediated all possible flaws prior to the pentest, and then reactively respond to each vulnerability they missed — typically without comprehension of the tools or techniques the Red Team leveraged in their attack. Is it any wonder that Blue Teams hate traditional pentests? Why aren't the Red Team consultants surprised that the same tools and attack vectors work a year later against the same targets?

A Purple Team Pentest should be thought of as a dynamic amalgamation of Red Team and Blue Team members, with the purpose of overcoming communication hurdles, facilitating knowledge transfer, and generally arming the Blue Team with newly practiced skills against a more sophisticated attacker or series of attack scenarios.

How to Orchestrate a Purple Team Pentest Engagement

Very few organizations have their own internal penetration testing team, and even those that do regularly utilize external consulting companies to augment that internal team, to ensure the appropriate skills are on hand and to tackle more sophisticated pentesting demands. A Purple Team Pentest almost al[...]



Likely and Behind the Scenes Changes at the FCC

2016-11-30T07:47:00-08:00

It should come as no surprise that the Federal Communications Commission will substantially change its regulatory approach, wingspan and philosophy under a Trump-appointed Chairman. One can readily predict that the new FCC will largely undo what has transpired in previous years. However, that conclusion warrants greater calibration.

As a threshold matter, the new senior managers at the FCC will have to establish new broad themes and missions. They have several options, some of which will limit how deregulatory and libertarian the Commission can be. Several ways forward come to mind:

• Channeling Trump Populism – the FCC can execute President Trump's mission of standing up to cronyism and rent seeking, even when it harms traditional constituencies and stakeholders.
• What's Good for Incumbents is Good for America – the FCC can revert to the comfortable and typical bias in favor of incumbents like Comcast, Verizon, AT&T and the major broadcast networks.
• A Libertarian Credo – the FCC can reduce its regulatory wingspan, budget and economic impact by concentrating on limited core statutory mandates, such as spectrum management.
• Humility – without having the goal of draining the FCC's pond, senior managers can temper their partisanship and snarkiness by refraining from mission creep.

Each of the above scenarios hints at major and equally significant, but unpublicized, changes at the agency. A populist FCC equates the public interest with what the court of public opinion supports. For example, most consumers like subsidies that make products and services appear free. A populist FCC responds to consumers by interpreting network neutrality rules as allowing zero rating and sponsored data plans. However, a populist FCC risks overemphasis on public opinion that stakeholders can energize, as occurred when companies like Netflix and Google used their websites for 24/7 opposition to the Stop Online Piracy Act, and when John Oliver motivated 4 million viewers to file informal comments favoring network neutrality on the overburdened FCC website.

On the other hand, a populist FCC can remind rural residents of how much they count in this new political environment. The FCC can validate rural constituencies by refraining from modifying — if not eliminating — inefficient and poorly calibrated universal service cross-subsidies. Most telephone subscribers in the U.S. do not realize that they are paying a 10%+ surcharge on their bills to support universal service funding, most of which flows to incumbent telephone companies. Consumers would quickly contract compassion fatigue if they knew about this sweetheart arrangement.

The favoring-incumbents scenario has a long and tawdry history at the FCC. If the new FCC reverts to this model, the Commission will largely give up fining companies for regulatory violations. Additionally, it might purport to reintroduce economic analysis to its decision making by adopting incumbent-advocated, but highly controversial, templates. For example, incumbents have touted the "Rule of 3" to support further industry consolidation. This rule is nothing more than an advocacy viewpoint that markets with 3 competitors generate most of the consumer benefits accruing from markets with more than 3 competitors. Having only 3 competitors may work if 1 of them does not collude and match the terms, conditions and prices offered by the other 2. But in many markets — think commercial aviation — having only 3 operators risks markets organized to extract maximum revenues from consumers with little incentive to innovate a[...]



Court Dismisses .Web Lawsuit, Says Agreement Not to Sue Is Enforceable

2016-11-29T15:52:00-08:00

"Judge Percy Anderson of the U.S. District Court, Central District of California has granted ICANN's motion to dismiss in a lawsuit brought by a subsidiary of new TLD company Donuts," reports Andrew Allemann in Domain Name Wire. "Donuts filed a lawsuit because it was upset that Verisign was bankrolling another applicant's bid for the domain. Donuts believed that the applicant, Nu Dot Co, had undergone changes that required updating information with ICANN prior to the auction. ... But new TLD applicants agreed to not sue ICANN. Donuts argued to the court that this covenant not to sue was unenforceable because it was void under California law and unconscionable."

Follow CircleID on Twitter

More under: ICANN, Law, Top-Level Domains




Shadow Regulations and You: One More Way the Internet's Integrity Can Be Won

2016-11-29T14:51:00-08:00

Even those who care about net neutrality might not have heard of the aptly named Shadow Regulations. These back-room agreements among companies regulate Internet content for a number of legitimate reasons, including curbing hate speech and terrorism, protecting intellectual property, and protecting the safety of children. While in name they may be noble, in actuality there are very serious concerns that Shadow Regulations are implemented without the transparency, accountability, and inclusion of stakeholders necessary to protect free speech on the Internet. A recent SF-Bay Internet Society (ISOC) Chapter event, co-hosted by the Electronic Frontier Foundation (EFF) in collaboration with the global Internet Society, put the spotlight on how to improve these agreements.

The keynote speakers from EFF, Mitch Stoltz, Senior Staff Attorney, and Jeremy Malcolm, Senior Global Analyst, acknowledged that there is a place for Shadow Regulations in an open Internet, but not without some serious modifications. After all, the basis of the Internet is the voluntary adoption of standards, and Shadow Regulations have the benefit of crossing borders and being more flexible, cheaper, and faster than traditional legislation. These regulations can take many forms, including codes, standards, principles, and memorandums of understanding (MOUs), and can pop up at many vulnerable links across the Internet, which the EFF calls Free Speech Weak Links.

So when should the public be concerned about Shadow Regulations encroaching on Internet freedoms? Whenever there is no space for transparency, accountability and user participation, very shady Shadow Regulations can be implemented. Take, for example, policy laundering: when governments want to implement unethical policies, such as curtailing freedom of speech, they can place the blame on companies through these regulations. Stoltz explained, "It's an abdication of responsibility to pressure platforms like Facebook to come up with a policy and enforce it while government washes its hands [of any responsibility]." When governments are backing these agreements, they're not necessarily voluntary, as companies might be engaging to curry governmental favor.

In the current system, an industry can restrict content and then prop itself up as judge, jury, and executioner. Spreading these roles across impartial bodies with multi-stakeholder processes is one obvious solution. This requires balance, inclusion, and accountability: one stakeholder cannot overpower the others, the right stakeholders have to participate and be given the resources to participate, and there need to be standards that keep the body and the stakeholders accountable to each other. In some cases, Shadow Regulations won't be the most effective solution: for example, in the case of hate speech, it may be more effective to empower users to limit their exposure to it rather than trying to erase it from the Internet.

* * *

Learn more about what the EFF is doing with its Shadow Regulation Project and watch the video from this event. Become a member of the San Francisco-Bay Area Internet Society Chapter to support more events like this.

About the SF-Bay Area Chapter: The San Francisco Bay Area ISOC Chapter serves California, including the Bay Area and Silicon Valley, by promoting the core values of the Internet Society. Through its work, the Chapter promotes open development, evolution and access to the Internet for those in its geographical region and beyond.
This article was written by Jenna Spagnolo to support the SF Bay ISOC C[...]