2016-12-02T14:50:00-08:00
This post is conjecture, but it is informed conjecture. Consider the following:
• When Google Fiber started in Kansas City, most people assumed that it was a demonstration project, intended to spur investment by the incumbent US Internet service providers (ISPs). Few thought that Google wanted to become a retail ISP.
• Google Fiber garnered a lot of publicity, and Google began speaking of it as a real, profit-making business. They announced other cities and started laying fiber in some of them.
• Last June, Google bought Webpass, a small ISP that deploys fiber and was experimenting with unproven, but perhaps revolutionary, pCell wireless technology from Artemis Networks. I speculated that they might be thinking of shifting Google Fiber to a hybrid fiber-wireless model based on that acquisition and other experiments they were conducting.
• Last October, Google Fiber announced that their work would continue in cities where they had launched or were under construction, but that they would "pause operations and offices" in cities where they had been conducting exploratory discussions, and they took many, but not all, workers off the Google Fiber project.
• Google's Project Link has installed wholesale fiber backbones in two African capitals, and I have speculated that they might do the same in Havana (with the caveat that they would do it in conjunction with ETECSA, since there are no competing retail ISPs in Cuba as there are in Africa).
• Last July, ETECSA announced that they would be running a fiber trial in parts of Old Havana. They did not specify whether it was fiber to the premises or to the neighborhood.
• A month ago, a friend told me that a friend of his who works at ETECSA said the fiber trial would begin December 5.
• Last week, Trump threatened to "terminate the deal" (whatever that means to him) if Cuba would not make it better.
• Yesterday, nearly identical stories suggesting that the White House was pushing Cuba on deals with Google and General Electric were published in the Wall Street Journal and El Nuevo Herald.
That is all for real — now for the conjecture ... Maybe the trial in Old Havana will be a joint project between Google and ETECSA. Google has considerable fiber installation experience with Project Link in Africa and Google Fiber in the US. A joint project with ETECSA would be relatively simple because they would not have to deal with competing ISPs, as in Africa, or lawsuits and other obstacles from incumbent ISPs, as in the United States. It could be either a pilot experiment — a trial — or the first step in leapfrogging Havana's connectivity infrastructure. One can imagine Google installing a fiber backbone in Havana, as they have done in Accra and Kampala, and leaving it up to ETECSA to connect premises using a mix of fiber, coaxial cable and wireless technology. If that were to happen, Havana could "leapfrog" from being one of the worst-connected capital cities in the world to a model of next-generation technology. If things went well in Havana, which city would be next?
The partnership between Google and ETECSA could take many forms. Google might supply expertise and capital, and ETECSA could supply labor and deal with the Cuban and Havana bureaucracies. In return, Google would get terrific publicity, a seat at the table when other Cuban infrastructure like data centers or video production facilities is discussed, and more users to click on their ads. (Take that, Facebook.) Havana could also serve as a model and reference sale for cooperation between Google and other cities. (Take that, Comcast and AT&T.) There might even be some revenue sharing, with ETECSA paying Google as the ISPs do in Africa. This would also be a win for the US administration and President Obama's legacy. Trump says he wants to renegotiate "the deal" with Cuba. If so, he would find Google (and GE?)
at the negotiating table along with US airlines, telephone companies, hotel chains, cruise lines, etc. Again — this is 100% unfounded [...]
Hackers have stolen over 2 billion rubles ($31 million) from correspondent accounts at the Russian central bank, the bank reported today — the latest example of an escalation of cyber attacks on financial institutions around the globe. Reuters reports: "Central bank official Artyom Sychyov discussed the losses at a briefing, saying that the hackers had attempted to steal about 5 billion rubles. Sychyov was commenting on a central bank report released earlier in the day, that told about hackers breaking into accounts there by faking a client's credentials. The bank provided few other details in its lengthy report."
Follow CircleID on Twitter
More under: Cybercrime
Thousands of TalkTalk and Post Office customers in the UK have had their Internet access cut by an attack targeting certain types of Internet routers, according to a BBC report on Thursday. "A spokeswoman for the Post Office told the BBC that the problem began on Sunday and had affected about 100,000 of its customers. TalkTalk also confirmed that some of its customers had been affected, and it was working on a fix. It is not yet known who is responsible for the attack. It involves the use of a modified form of the Mirai worm." Last week, Germany's Deutsche Telekom reported that close to a million of its customers had lost their Internet connection as a result of the attack. Mirai was also involved in the historic October attack that disrupted the world's leading websites.
Gambia election day – Internet and international calls banned.
"Communication blackout shatters illusion of freedom during the election," says Amnesty International in a statement on Thursday. Amid blocks on the Internet and other communications networks in Gambia during today's presidential election, Samira Daoud, Amnesty International's Deputy Regional Director for West and Central Africa, said: "This is an unjustified and crude attack on the right to freedom of expression in Gambia, with mobile internet services and text messaging cut off on polling day. Shutting down these communication networks shatters the illusion of freedom that had emerged during the two-week period of the electoral campaign, when restrictions appeared to have been eased. ... Blocks on the internet and other communications networks amount to a flagrant violation of the right to freedom of expression and access to information. The same rights that people have offline must also be protected online."
— The election features three candidates: President Yahya Jammeh (APRC, Alliance for Patriotic Reorientation and Construction), Adama Barrow (Coalition 2016, a coalition of opposition parties) and Mama Kandeh (GDC, Gambia Democratic Congress). The election will be won by whoever gains the most votes on 1 December; there is no second round, and results are expected on 2 December.
— Govt of Gambia orders Internet blackout ahead of national election. Service down since 20:05 UTC on 30-Nov. Dyn Research / Dec 1
2016-12-01T11:31:00-08:00
Global distribution of Avalanche servers. Source: Shadowserver.org
After over four years of investigation, the international criminal infrastructure platform known as 'Avalanche' is reported to have been dismantled via a collaborative effort involving the Public Prosecutor's Office Verden and the Lüneburg Police (Germany) in close cooperation with the United States Attorney's Office for the Western District of Pennsylvania, the Department of Justice and the FBI, Europol, Eurojust and global partners. The takedown also required help from INTERPOL, the Shadowserver Foundation, Registrar of Last Resort, ICANN and domain name registries. Additional information below from the official report:
— 5 individuals were arrested, 37 premises were searched, and 39 servers were seized. Victims of malware infections were identified in over 180 countries. Also, 221 servers were put offline through abuse notifications sent to the hosting providers. The operation marks the largest-ever use of sinkholing to combat botnet infrastructures and is unprecedented in its scale, with over 800,000 domains seized, sinkholed or blocked.
— The Avalanche network was used as a delivery platform to launch and manage mass global malware attacks and money mule recruiting campaigns. It caused an estimated EUR 6 million in damages in concentrated cyberattacks on online banking systems in Germany alone.
— Monetary losses associated with malware attacks conducted over the Avalanche network are estimated to be in the hundreds of millions of euros worldwide, although exact calculations are difficult due to the high number of malware families managed through the platform.
— What made the 'Avalanche' infrastructure special was its use of the so-called double fast flux technique. The complex setup of the Avalanche network was popular among cybercriminals because double fast flux offers enhanced resilience to takedowns and law enforcement action.
— Malware campaigns distributed through this network include around 20 different malware families, such as goznym, marcher, matsnu, urlzone, xswkit, and pandabanker. The money mule schemes operating over Avalanche involved highly organised networks of "mules" that purchased goods with stolen funds, enabling cybercriminals to launder the money they acquired through the malware attacks or other illegal means.
— An infographic illustrating the Avalanche operation, along with a detailed technical infographic, accompanies the official report.
Additional reports:
— Shadowserver: Avalanche Law Enforcement Take Down
— Krebs on Security: 'Avalanche' Global Fraud Ring Dismantled
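Double fast flux hides both a botnet's front-end nodes and its name servers behind DNS records that rotate through large pools of compromised hosts. A minimal sketch of how a defender might flag fast-flux-like behavior from a series of DNS lookup snapshots (the function names, thresholds, and sample data are illustrative assumptions, not details from the Avalanche operation):

```python
def flux_score(snapshots):
    """Given successive sets of A-record IPs observed for one domain,
    return (total distinct IPs seen, average churn between lookups)."""
    distinct = set()
    churn = 0
    prev = None
    for ips in snapshots:
        distinct |= set(ips)
        if prev is not None:
            # IPs that appeared or disappeared since the last lookup
            churn += len(set(ips) ^ prev)
        prev = set(ips)
    avg_churn = churn / max(len(snapshots) - 1, 1)
    return len(distinct), avg_churn

def looks_fast_flux(snapshots, min_distinct=10, min_churn=4):
    """Heuristic: many distinct IPs plus heavy turnover between lookups."""
    distinct, avg_churn = flux_score(snapshots)
    return distinct >= min_distinct and avg_churn >= min_churn

# A stable site resolves to the same few addresses every time...
stable = [["203.0.113.1", "203.0.113.2"]] * 5
# ...while a fast-flux domain returns a fresh batch of compromised hosts.
fluxy = [[f"198.51.100.{i*5 + j}" for j in range(5)] for i in range(5)]
print(looks_fast_flux(stable), looks_fast_flux(fluxy))
```

Real detectors also weigh record TTLs (fast-flux domains use very short ones) and the autonomous-system diversity of the returned IPs; this sketch keeps only the IP-churn signal.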
2016-11-30T10:39:00-08:00
It's not particularly clear whether a marketing intern thought he was being clever or a fatigued pentester thought she was being cynical when the term "Purple Team Pentest" was first thrown around like spaghetti at the fridge door, but it appears we're now stuck with the term for better or worse. Just as the definition of penetration testing has broadened to the point that we commonly label a full-scope penetration of a target's systems, with the prospect of lateral compromise and social engineering, as a Red Team Pentest — delivered by a "Red Team" entity operating from a sophisticated hacker's playbook — we now often acknowledge the client's vigilant security operations and incident response team as the "Blue Team", charged with detecting and defending against security threats or intrusions on a 24x7 response cycle. Requests for penetration tests (Black-box, Gray-box, White-box, etc.) are typically initiated and procured by a core information security team within an organization. This core security team tends to operate at a strategic level within the business — advising business leaders and stakeholders of new threats, reviewing security policies and practices, coordinating critical security responses, evaluating new technologies, and generally being the go-to people for out-of-the-ordinary security issues. When it comes to penetration testing, the odds are high that some members are proficient with common hacking techniques and understand the technical impact of threats upon the core business systems. These are the folks that typically scope and eventually review the reports from a penetration test — they are, however, NOT the "Blue Team", though they may help guide and at times provide third-line support to security operations people.
No, the nucleus of a Blue Team is the front-line personnel watching over SIEMs, reviewing logs, initiating and responding to support tickets, and generally swatting down each detected threat as it appears during their shift. Blue Teams are defensively focused and typically proficient at their operational security tasks. The highly focused nature of their role does, however, often mean that they lack what can best be described as a "hacker's-eye view" of the environment they're tasked with defending. Traditional penetration testing approaches are often adversarial. The Red Team must find flaws, compromise systems, and generally highlight the failures in the target's security posture. The Blue Team faces the losing proposition of having to have already secured and remediated all possible flaws prior to the pentest, and then reactively respond to each vulnerability they missed — typically without comprehension of the tools or techniques the Red Team leveraged in their attack. Is it any wonder that Blue Teams hate traditional pentests? Why aren't the Red Team consultants surprised that the same tools and attack vectors work a year later against the same targets? A Purple Team Pentest should be thought of as a dynamic amalgamation of Red Team and Blue Team members with the purpose of overcoming communication hurdles, facilitating knowledge transfer, and generally arming the Blue Team with newly practiced skills against a more sophisticated attacker or series of attack scenarios.
How to Orchestrate a Purple Team Pentest Engagement
Very few organizations have their own internal penetration testing team, and even those that do regularly utilize external consulting companies to augment that internal team, to ensure the appropriate skills are on hand and to tackle more sophisticated pentesting demands. A Purple Team Pentest almost always utilizes the services of an external pentest team — ideally one that is accomplished and experienced in Red Team pentesting.
Bringing together two highly skilled security teams — one in attack, the other in defense — and having them not only work together, but to also achiev[...]
2016-11-30T07:47:00-08:00
It should come as no surprise that the Federal Communications Commission will substantially change its regulatory approach, wingspan and philosophy under a Trump-appointed Chairman. One can readily predict that the new FCC will largely undo what has transpired in previous years. However, that conclusion warrants greater calibration. As a threshold matter, the new senior managers at the FCC will have to establish new broad themes and missions. They have several options, some of which will limit how deregulatory and libertarian the Commission can be. Several ways forward come to mind:
Channeling Trump Populism – the FCC can execute President Trump's mission of standing up to cronyism and rent seeking, even when doing so harms traditional constituencies and stakeholders.
What's Good for Incumbents is Good for America – the FCC can revert to the comfortable and typical bias in favor of incumbents like Comcast, Verizon, AT&T and the major broadcast networks.
A Libertarian Credo – the FCC can reduce its regulatory wingspan, budget and economic impact by concentrating on limited core statutory mandates, such as spectrum management.
Humility – without having the goal of draining the FCC's pond, senior managers can temper their partisanship and snarkiness by refraining from mission creep.
Each of the above scenarios hints at major and equally significant, but unpublicized, changes at the agency. A populist FCC equates the public interest with what the court of public opinion supports. For example, most consumers like subsidies that make products and services appear free. A populist FCC responds to consumers by interpreting network neutrality rules as allowing zero rating and sponsored data plans.
However, a populist FCC risks overemphasis on public opinion that stakeholders can energize, as occurred when companies like Netflix and Google used their websites for 24/7 opposition to the Stop Online Piracy Act, and when John Oliver motivated 4 million viewers to file informal comments favoring network neutrality on the overburdened FCC website. On the other hand, a populist FCC can remind rural residents of how much they count in this new political environment. The FCC can validate rural constituencies by refraining from modifying — if not eliminating — inefficient and poorly calibrated universal service cross-subsidies. Most telephone subscribers in the U.S. do not realize that they are paying a 10%+ surcharge on their bills to support universal service funding, most of which flows to incumbent telephone companies. Consumers would quickly contract compassion fatigue if they knew about this sweetheart arrangement. The favoring-incumbents scenario has a long and tawdry history at the FCC. If the new FCC reverts to this model, the Commission will largely give up fining companies for regulatory violations. Additionally, it might purport to reintroduce economic analysis to its decision making by adopting incumbent-advocated, but highly controversial, templates. For example, incumbents have touted the "Rule of 3" to support further industry consolidation. This rule is nothing more than an advocacy viewpoint that markets with 3 competitors generate most of the consumer benefits accruing from markets with more than 3 competitors. Having only 3 competitors may work if 1 of them does not collude and match the terms, conditions and prices offered by the other 2. But in many markets — think commercial aviation — having only 3 operators risks markets organized to extract maximum revenue from consumers with little incentive to innovate and compete. An incumbent-friendly FCC likely will approve mergers and acquisitions with limited, if any, negotiated conditions.
This kind of FCC will approve AT&T's acquisition of Time Warner despite President Trump's disapproval. The FCC probably also would have no problem with [...]
"Judge Percy Anderson of the U.S. District Court, Central District of California has granted ICANN's motion to dismiss in a lawsuit brought by a subsidiary of new TLD company Donuts," reports Andrew Allemann in Domain Name Wire. "Donuts filed a lawsuit because it was upset that Verisign was bankrolling another applicant's bid for the domain. Donuts believed that the applicant, Nu Dot Co, had undergone changes that required updating information with ICANN prior to the auction. ... But new TLD applicants agreed to not sue ICANN. Donuts argued to the court that this covenant not to sue was unenforceable because it was void under California law and unconscionable."
2016-11-29T14:51:00-08:00
Even those who care about net neutrality might not have heard of the aptly named Shadow Regulations. These back-room agreements among companies regulate Internet content for a number of legitimate purposes, including curbing hate speech and terrorism, and protecting intellectual property and the safety of children. While in name they may be noble, in actuality there are very serious concerns that Shadow Regulations are implemented without the transparency, accountability, and inclusion of stakeholders necessary to protect free speech on the Internet. A recent SF-Bay Internet Society (ISOC) Chapter event, co-hosted by the Electronic Frontier Foundation (EFF) in collaboration with the global Internet Society, put the spotlight on how to improve these agreements. The keynote speakers from EFF, Mitch Stoltz, Senior Staff Attorney, and Jeremy Malcolm, Senior Global Analyst, acknowledged that there is a place for Shadow Regulations in an open Internet, but not without some serious modifications. After all, the basis of the Internet is the voluntary adoption of standards, and Shadow Regulations have the benefit of crossing borders and being more flexible, cheaper, and faster than traditional legislation. These regulations can take many forms, including codes, standards, principles, and memorandums of understanding (MOUs), and can pop up at many vulnerable links across the Internet, which the EFF calls Free Speech Weak Links. So when should the public be concerned about Shadow Regulations encroaching on Internet freedoms? Whenever there is no space for transparency, accountability and user participation, very shady Shadow Regulations can be implemented. Take, for example, policy laundering: when governments want to implement unethical policies, such as curtailing freedom of speech, they can place the blame on companies through these regulations.
Stoltz explained, "It's an abdication of responsibility to pressure platforms like Facebook to come up with a policy and enforce it while government washes its hands [of any responsibility]." When governments are backing these agreements, they're not necessarily voluntary, as companies might be engaging to curry governmental favor. In the current system, an industry can restrict content and then prop itself up as judge, jury, and executioner. Spreading these roles across impartial bodies with multi-stakeholder processes is one obvious solution. This requires balance, inclusion, and accountability: one stakeholder cannot overpower the others, the right stakeholders have to participate and be given resources to participate, and there need to be standards that keep the body and stakeholders accountable to each other. In some cases, Shadow Regulations won't be the most effective solution: for example, in the case of hate speech, it may be more effective to empower users to limit their exposure to it rather than trying to erase it from the Internet. * * * Learn more about what the EFF is doing with its Shadow Regulation Project and watch the video from this event. Become a member of the San Francisco-Bay Area Internet Society Chapter to support more events like this. About the SF-Bay Area Chapter: The San Francisco Bay Area ISOC Chapter serves California, including the Bay Area and Silicon Valley, by promoting the core values of the Internet Society. Through its work, the Chapter promotes open development, evolution and access to the Internet for those in its geographical region and beyond. Written by Jenna Spagnolo to support the SF Bay ISOC Chapter.
2016-11-29T13:23:00-08:00
I've written posts about trolls in Cuba, where Operation Truth is said to use a thousand university-student trolls, and trolls in China, where government workers fabricate an estimated 488 million social media posts annually. Now we are reading about Russian government trolls. Just before the election, this post documented Russian trolling and warned that "Trump isn't the end of Russia's information war against America. They are just getting started." "In Internet slang, a troll (/ˈtroʊl/, /ˈtrɒl/) is a person who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages in an online community (such as a newsgroup, forum, chat room, or blog) with the intent of provoking readers into an emotional response…" Internet troll, Wikipedia https://en.wikipedia.org/wiki/Internet_troll After the election a new site, PropOrNot.com (propaganda or not), came online. Its mission is outing Russian propaganda using a combination of forensic online sleuthing and crowdsourced reporting, and it has compiled a list of 200 sites that rapidly spread stories written by Russian trolls. (More about PropOrNot here.) But is PropOrNot what it claims to be? The people behind the site remain anonymous (for understandable reasons) and their domain name registration is private. How do they determine that a site is a home for Russian content? Is there a chance that they are pro-Clinton, sour-grapes trolls? Might trolls and hackers figure out ways to game PropOrNot and get sites they oppose blacklisted? Hmmm — I wonder if the US government hires trolls and, if not, should they? How about Canada? Chile? Zambia? How about Exxon Mobil trolls or McDonald's trolls? Is it trolls all the way down? The fake news and trolling revealed during the last few months of the US political campaign have sowed doubts about everything we see and read online.
We're beginning the transition from "critical thinking" to "paranoid thinking." In 1961, Newton N. Minow gave a talk to the National Association of Broadcasters in which he worried that television was becoming a "vast wasteland": But when television is bad, nothing is worse. I invite each of you to sit down in front of your television set when your station goes on the air and stay there, for a day, without a book, without a magazine, without a newspaper, without a profit and loss sheet or a rating book to distract you. Keep your eyes glued to that set until the station signs off. I can assure you that what you will observe is a vast wasteland. Will the Internet become a vast wasteland? Newton Minow was correct, but there were and still are oases in the television wasteland. In spite of the trolls, fake news sites, troll-bots, etc., the Internet is and will remain replete with oases, but we cannot ignore the wasteland. * * * Update: I reached out to PropOrNot, pointing out that they do not identify themselves and that their domain registration is private, and asking how I could know they were not posting false claims themselves. They replied that "We sometimes provide much more background information about ourselves to professional journalists." They have now posted a document on their methodology, showing how they select sites for their list. They are not saying the sites are paid trolls, but that they publish information that originates on Russian government sites — that they disseminate Russian propaganda. At least one of the sites on their list, The Corbett Report, has refuted the claim that it is pro-Russian, but it does not address the question of distributing material that originated on Russian sites. Written by Larry Press, Professor of Information Systems at California State University
"We are building the Internet Archive of Canada because, to quote our friends at LOCKSS, 'lots of copies keep stuff safe,'" writes founder Brewster Kahle in a blog post on Tuesday. "On November 9th in America, we woke up to a new administration promising radical change. It was a firm reminder that institutions like ours, built for the long-term, need to design for change. For us, it means keeping our cultural materials safe, private and perpetually accessible. It means preparing for a Web that may face greater restrictions." The organization is seeking donations for the project, which is estimated to cost millions.
The operators of geographic top-level domains such as .nyc, .london, .berlin and .tokyo have founded an international not-for-profit association in Brussels. In a press release issued on Monday, the new association, the GeoTLD Group, announced plans to promote geographic top-level domains, "ensuring they become essential components of the digital infrastructure, benefiting stakeholders of a location, language or culture." Initial members of the GeoTLD Group include Amsterdam, Cape Town, Paris, Sydney and Vienna.
— "For over 30 years, cities and regions have had to peg their digital identities to their respective countries' top-level domains or international ones. Brussels, for instance, communicated online as www.visitbrussels.be. With its own top-level domain, the City of Brussels is now of course using www.visit.brussels for its branding, locally and internationally."
— "The new digital identities have been well accepted by Internet users and are increasingly used by everyone locally – from governments and local companies, to individuals. With an international association we are now able to connect and promote the interests of our members and engage the different stakeholders locally, nationally and internationally." –GeoTLD Group's Chairman Sébastien Ducos
— The GeoTLD Group says it also plans to make more cities, regions and communities aware of the advantages of their own local Internet identity.
2016-11-29T10:27:00-08:00
Not infrequently heard in domain name disputes are cries of shock and gnashing of teeth that domain name holders may lawfully offer their inventory at excessive prices. Take, for example, TOBAM v. M. Thestrup / Best Identity, D2016-1990 (WIPO November 21, 2016) [...]
According to a new update on Facebook's Internet.org website on Monday, a service called "Express Wifi" has gone live, and plans are in place to expand to other regions soon. Express Wifi is a program that allows carriers, internet service providers, and local entrepreneurs to work together, says the company, in order to help expand connectivity to underserved locations around the world. Napier Lopez, reporting in TNW, writes: "Facebook's Free Basics program — an attempt to bring free internet to developing areas — had quite the messy launch. After getting banned in India, now Facebook is trying a different approach. ... Unlike Free Basics, Express Wifi isn't, well, free. Instead, the program allows customers to purchase affordable data packs for access via Wifi."
For the first time, auto makers and wireless carriers are actually seeking common ground around the creation of a new wireless standard, writes Roger Lanctot, Associate Director in the Global Automotive Practice at Strategy Analytics. "Most interesting of all as far as 5G is concerned is the involvement of the automotive industry in setting and testing the standard. ... In fact, the priorities of auto makers are in the forefront as the use cases are particularly suited to safety and smart city applications." However, Lanctot points out that disagreement among wireless experts could influence implementation outcomes. "The resulting confusion threatens to impede the adoption of new technologies as car makers, in particular, may cling to more familiar solutions."
Close to a million Deutsche Telekom customers have had trouble getting online since Sunday afternoon, which the company on Monday confirmed to be the result of an "outside" attack. Around 900,000 customers with specific routers are reported to have been affected. "According to our knowledge, an attack on maintenance interfaces is currently taking place worldwide," the company reported on Monday. "This was also confirmed by the Federal Office for Information Security. Following the latest findings, routers of Deutsche Telekom customers were affected by an attack from outside. Our network was not affected at any time. The attack attempted to infect routers with malware but failed, which caused crashes or restrictions for four to five percent of all routers. This led to a restricted use of Deutsche Telekom services for affected customers. We have implemented a series of filter measures in our network."
— Update, Nov 29: "German internet outage was failed botnet attempt," Eric Auchard reporting in Reuters from Frankfurt: "Deutsche Telekom's head of IT security Thomas Tschersich told the newspaper Der Tagesspiegel that the outages appeared to be tied to a botched attempt to turn a sizeable number of customers' routers into a part of the Mirai botnet."
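The Mirai variant involved here spread through the TR-069/TR-064 remote-management ("maintenance") interface that many ISP-supplied routers expose on TCP port 7547. A minimal sketch of checking whether that port answers on a device you administer (the gateway address is an assumption; only probe equipment you own):

```python
import socket

def port_open(host, port=7547, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (192.168.1.1 is a common default gateway and an
# assumption here — substitute your own router's address):
#   if port_open("192.168.1.1"):
#       print("TR-069 management port 7547 is reachable from the LAN")
```

A connection succeeding only shows the port is reachable, not that the device is vulnerable; conversely, many ISPs accept TR-069 traffic solely from their own management network, so a LAN-side probe can miss WAN-side exposure.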
2016-11-28T10:54:01-08:00
The demand for penetration testing and security assessment services worldwide has been growing year on year. Driven largely by Governance, Risk, and Compliance (GRC) concerns, plus an evolving pressure to be observed taking information security and customer privacy seriously, most CIOs/CSOs/CISOs can expect to conduct regular "pentests" as a means of validating their organization's or product's security. An unfortunate consequence of two decades of professional service-oriented delivery of pentests is that the very term "penetration testing" now covers a broad range of security services and risk attributes — with most consulting firms providing a smorgasbord of differentiated service offerings, intermixing terms such as security assessment and pentest, and constructing hybrid testing methodologies. For those newly tasked with having to find and retain a team capable of delivering a pentest, the prospect of having to decipher the lingo and identify the right service is often daunting — as failure to get it right is not only financially costly but may also be career-ending if later proven to be inadequate. What does today's landscape of pentesting look like? All penetration testing methodologies and delivery approaches are designed to factor in and illustrate a threat represented by an attack vector or exploitation. A key differentiator between many testing methodologies lies in whether the scope is to identify the presence of a vulnerability or to exploit and subsequently propagate an attack through that vulnerability. The former is generally bucketed in the assessment and audit taxonomy, while the latter is more commonly a definition for penetration testing (or an ethical hack). The penetration testing market and categorization of services is divided by two primary factors — the level of detail that will be provided by the client, and the range of "hacker" tools and techniques that will be allowed as part of the testing.
Depending upon the business drivers behind the pentest (e.g. compliance, risk reduction, or attack simulation), there is often a graduated scale of services. Some of the most common terms used are:

Vulnerability Scanning – The use of automated tools to identify hosts, devices, infrastructure, services, applications, and code snippets that may be vulnerable to known attack vectors or have a history of security issues and vulnerabilities.

Black-box Pentest – The application of common attack tools and methodologies against a client-defined target or range of targets, in which the pentester is tasked with identifying all the important security vulnerabilities and configuration failures within the scoped engagement. Typically, the penetration scope is limited to approved systems and windows of exploitation to minimize the potential for collateral damage. The client provides little information beyond the scope and expects the consultant to replicate the discovery and attack phases of an attacker who has zero insider knowledge of the environment.

Gray-box Pentest – Identical in methodology to the black-box pentest, but with some degree of insider knowledge transfer. When an important vulnerability is uncovered, the consultant will typically liaise with the client to obtain additional "insider information," which can be used either to establish an appropriate risk classification for the vulnerability or to initiate a transfer of additional information about the host or the data it contains (information that could likely be gained by successfully exploiting the vulnerability), without having to risk collateral damage or downtime during the testing[...]
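To make the distinction concrete, the discovery phase that underlies vulnerability scanning can be sketched in a few lines of Python. This is a minimal illustration, not a scanner: real tools follow up open ports with service fingerprinting and lookups against known-vulnerability databases, and the host and port list here are assumptions for the example.

```python
import socket

def probe_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`.

    This mirrors only the discovery step of a vulnerability scan;
    it deliberately stops short of any exploitation, which is what
    separates scanning/assessment from a penetration test.
    """
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports
```

Run only against systems you are authorized to test; even this benign probe falls inside the "approved systems" scoping discussed above.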
2016-11-25T07:48:00-08:00
Post-Thanksgiving is a time of reflection when we are thankful for technological improvements that allow us to succeed. Every so often, a technology comes along that not only improves our business but can also help the world. Cloud computing is such a technology. Transitioning to the cloud is a good choice for just about any business, for several reasons. Cloud applications offer scalability, performance, cost-effectiveness and easy mobile access. However, if you question how your decision to use cloud-based software affects the environment, you're not the only one. The advantages that cloud-based solutions offer do in fact have implications for our planet. Cloud services differ from more "traditional" approaches by making a single large data center accessible to multiple businesses. When you consider the electrical needs of many smaller data centers, each with its own standalone cooling systems, it begins to make sense how server arrays can be responsible for about two percent of electricity use in the United States. But is that a bad thing? Maybe not. Leading the Way. Not surprisingly, the companies that have invested billions building the cloud infrastructure we've come to know and love are deeply interested in proving the cloud's advantages, but they're also interested in how it all impacts the environment. Google, for one, has teamed up with Lawrence Berkeley National Laboratory to investigate the possibility of increasing efficiency, and the results are promising. Scientists speculate that in the near future, we could potentially power all of Los Angeles for a year with the energy saved by moving common apps to the cloud. But where is all of this "wasted power" going in our current configuration? Take small business, for example. If you run an office with 15 computers, and they stay on all the time, you're not even close to maximizing the energy those computers are using.
A recent study revealed that the average computer in a small business setting like this is actually in use less than 10 percent of the time it's drawing power. Now imagine moving all of the application processes from the computers in that office out to a data center. Yes, the servers are still on all the time, but a single server in a massive array can handle all of the tasks carried out in the office. During the time your office isn't online, another business can take advantage of that resource. It's a win based on shared resources. Counting Carbon Credits. It's not just a matter of saving power, though. Huge reductions in carbon footprint can also be realized by migrating to shared data-center infrastructure. The process is called dematerialization. Simply put, it means using fewer physical resources to accomplish the same level of productivity. Migrating to the cloud means fewer machines are used, which in turn reduces power costs and the subsequent cooling needs that require still more power. Businesses don't have to be large for the gains to be significant, either. If you're an IT manager looking for that last little bit of ammunition in your argument to transition to the cloud, consider shedding some light on the environmental benefits of making the change. Carbon emissions are reduced in several ways by switching to the cloud. For example, large providers can use only the resources they need to accomplish a job, as compared to purchasing multiple computers that see just limited use. When new technologies allow providers to create more efficient data centers, the reduction in their carbon footprint is immediately amp[...]
2016-11-23T11:56:01-08:00
Global Internet Report 2016: The economics of building trust online; preventing data breaches. Data breaches are the oil spills of the digital economy. Over 429 million people were affected by reported data breaches in 2015, and that number is certain to grow even higher in 2016. These large-scale data breaches, along with uncertainties about the use of our data, cybercrime, surveillance and other online threats, are eroding trust in the Internet. This is why the 2016 edition of our Global Internet Report is dedicated to exploring data breaches, their impact on user trust and their consequences for the global digital economy. These consequences, not surprisingly, can be serious. The purpose of the report is not to emphasize the problem, but to offer solutions and to emphasize the important role that companies and organizations play in building a more trusted Internet. A key question raised by the report is: Why are organisations not taking all available steps to protect the personal information they collect from each of us? The report examines the issues and walks through a number of case studies that highlight the concerns. It ends with a series of five concrete recommendations for actions we need to take. A video preview is available at https://www.youtube.com/embed/FxPRGDF-9iY — We ask you to please read the 2016 GIR, to share the report widely, and to take whatever actions you can to bring about a more trusted Internet. This issue of trust is so serious that we risk undoing all of the progress we have made over the past three decades. It is time we act together to solve it. A version of this post was originally published on the Internet Society blog.
Written by Olaf Kolkman, Chief Internet Technology Officer (CITO), Internet Society
More under: Policy & Regulation, Privacy, Security, Web [...]
2016-11-22T11:28:00-08:00
The Broadband Internet Technical Advisory Group (BITAG) today released a report outlining a set of guidelines it believes could dramatically improve the security and privacy of IoT devices and minimize the costs associated with the collateral damage that would otherwise affect both end users and ISPs. The report also warns that unless manufacturers and distributors of IoT devices improve device security and privacy, consumer backlash may impede the growth of the IoT marketplace and ultimately limit the promise IoT holds. Other observations made in the report include: Insecure Communications: Many of the security functions designed for more general-purpose computing devices are difficult to implement on IoT devices, and a number of security flaws have been identified in the field, including unencrypted communications and data leaks from IoT devices. Data Leaks: IoT devices may leak private user data, both from the cloud (where data is stored) and between IoT devices themselves. Potential for Service Disruption: The potential loss of availability or connectivity not only diminishes the functionality of IoT devices, but may also degrade the security of devices in some cases, such as when an IoT device can no longer function without connectivity (e.g., a home alarm system deactivating if connectivity is lost). Device Replacement May Be an Alternative to Software Updates, for Inexpensive or "Disposable" Devices: In some cases, replacing a device entirely may be an alternative to software updates; certain IoT devices may be so inexpensive that updating software is impractical or not cost-effective. BITAG's Technical Working Group has provided a number of recommendations, including: IoT Devices Should Be Restrictive Rather Than Permissive in Communicating: When possible, devices should not be reachable via inbound connections by default.
IoT devices should not rely on the network firewall alone to restrict communication, as some communication between devices within the home may not traverse the firewall. IoT Devices Should Continue to Function if Internet Connectivity is Disrupted: An IoT device should be able to perform its primary function or functions (e.g., a light switch or a thermostat should continue to function with manual controls), even if it is not connected to the Internet. IoT Devices Should Continue to Function If the Cloud Back-End Fails: Many services that depend on a cloud back-end can continue to function, even if in a degraded or partially functional state, when connectivity to the cloud back-end is interrupted or the service itself fails. IoT Devices Should Support Addressing and Naming Best Practices: Many IoT devices may remain deployed for a number of years after they are installed. Supporting the latest protocols, such as IPv6, for addressing and naming will help ensure that these devices remain functional for years to come. IoT devices should also support the use or validation of DNS Security Extensions (DNSSEC) when domain names are used. The lead editors of the report were Jason Livingood, Vice President of Technology Policy & Standards at Comcast, and Nick Feamster, Professor of Computer Science at Princeton University. Douglas Sicker, Executive Director of BITAG, Chair of BITAG's Technical Working Group, Department Head of Engineering and Public Policy and a professor of Computer Science at Carnegie Mellon University, chaired the review itself. More under: Cyberattack, Internet of Things, Security [...]
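The "continue to function when the cloud fails" guidelines amount to a simple design pattern: keep the device's primary function driven by local state, and treat the cloud as an optional refinement. A minimal sketch in Python (the class, method names, and setpoints are illustrative assumptions, not anything specified by the BITAG report):

```python
class Thermostat:
    """Sketch of graceful degradation: the primary function
    (temperature control) never depends on cloud reachability."""

    def __init__(self, setpoint=20.0):
        self.setpoint = setpoint      # local state, usable offline
        self.cloud_online = False

    def apply_cloud_schedule(self, fetch_setpoint):
        """Try to refresh the setpoint from a cloud back-end; on any
        failure, degrade gracefully to the last known local setting."""
        try:
            self.setpoint = fetch_setpoint()
            self.cloud_online = True
        except Exception:
            self.cloud_online = False  # keep the local setpoint

    def should_heat(self, current_temp):
        # Primary function works whether or not the cloud is reachable.
        return current_temp < self.setpoint
```

A device built this way satisfies both resilience recommendations at once: losing connectivity merely freezes the schedule at its last known value rather than disabling the device, unlike the alarm-system failure mode the report warns about.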