2017-02-27T11:17:00-08:00
Today, one of the world's largest Internet companies, Alibaba, together with a compelling array of other providers, vendors, and government bodies, for the first time called for a visionary multilateral technical and operational "framework for a Blockchain of Things." The thorough, comprehensive 23-page document, SG20-C.008, was submitted to the upcoming ITU-T SG20 Internet of Things (IoT) Study Group meeting in Dubai, 13–23 March — the group's first gathering in the organization's new 2017-2020 study period. The action was a welcome step of strategic leadership at the global multilateral level, helping to accelerate a potentially far-reaching new platform for trusted, distributed identity management and to apply it to the Internet of Things ecosystem. As the document's history section notes, scattered related developments have occurred in several other venues and industries; however, the time to scale up worldwide collaboration was at hand. The Dubai venue was also significant given the UAE's recent actions to establish itself as a global leader in the sector, including establishing the Global Blockchain Council and hosting related international conferences. The document begins by providing background information: why IoT needs blockchain, what blockchain is, the challenges and benefits, a gap analysis of blockchain-related standards, and the valuable role for ITU-T. It also underscores that "blockchain is not bitcoin." It then proposes a new work item — "Framework of blockchain of things as decentralized service platform." The proposal has a well-structured, limited scope and an outline that includes common characteristics and requirements, a general framework as a decentralized service platform, and an IoT reference model that notably addresses security concerns. Two extensive appendices review other blockchain-related platforms and use cases.
Additional supporters are being solicited for the ground-breaking work. This well-done contribution — with Alibaba and the supporting parties leading the activity and collaborating with other venues — should significantly help "in bringing about great promises across a wide range of business applications in many fields, such as finance, banking, healthcare, government, manufacturing, insurance, retail, legal, media and entertainment, supply chain and logistics, finance and accounting, etc." The benefits are compelling. Blockchain offers new ways for IoT/SC&C (Internet of Things / Smart Cities & Communities) data to automate business processes among partners without setting up a complex and expensive centralized IT infrastructure. Blockchain's data protection fosters stronger working relationships with partners and greater efficiency as partners take advantage of the information provided. Bringing IoT/SC&C and blockchain together enables IoT devices to participate in blockchain transactions. Specifically, IoT devices can send data to public, consortium, or private blockchain ledgers for inclusion in shared transactions, with distributed records that are maintained by consensus and cryptographically hashed. The distributed replication in blockchain allows business partners to access and supply IoT/SC&C data without the need for central control and management. Additionally, the distributed ledger in a blockchain makes it easier to create cost-efficient business networks in which virtually anything of value can be tracked and traded without requiring a central point of control. Blockchain and IoT/SC&C together become a potential game changer, opening the door to new styles of digital interaction and creating opportunities to reduce the cost and complexity of operating and sustaining a business. There is, however, a small IPR bump in the road.
It appears that a small New York company called RightClick LLC, DBA Blockchain of Things LLC, asserted it began using the word mark "blockchain of things" in commerce on 1/[...]
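The ledger mechanics the contribution highlights (IoT device records grouped into shared transactions, maintained by consensus, and cryptographically hashed) can be illustrated with a minimal hash-chain sketch in Python. It is illustrative only: the block fields and sensor readings are invented for the example, not drawn from SG20-C.008.

```python
import hashlib
import json

def block_hash(body: dict) -> str:
    # Hash a canonical JSON encoding of the block's contents.
    encoded = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def append_block(chain: list, readings: list) -> list:
    # Each new block commits to the previous block's hash, so tampering
    # with any earlier IoT reading invalidates every later block.
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "prev_hash": prev, "readings": readings}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)
    return chain

def verify(chain: list) -> bool:
    # Recompute every hash and check each link to the previous block.
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True
```

A real blockchain adds distributed consensus among many nodes; the sketch shows only the hash linking that makes tampering with shared records detectable.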
2017-02-27T07:57:00-08:00
The general run of Uniform Domain Name Dispute Resolution Policy (UDRP) decisions is unremarkable. At best, such decisions are primarily instructive in establishing the metes and bounds of lawful registration of domain names. A few decisions stand out for their acuity of reasoning, and a few others for their lack of it. The latest candidate in the latter class is NSK LTD. v. Li shuo, FA170100 1712449 (Forum February 16, 2017) (
2017-02-25T15:52:00-08:00
5G sounds like the successor to 4G cellular telephony, and indeed that is the intent. While the progression from 2G to 3G to 4G and now 5G seems simple, the story is more nuanced. At CES last month I had a chance to learn more about 5G (not to be confused with 5 GHz Wi-Fi) as well as another standard, ATSC 3.0, which is supposed to be the next standard for broadcast TV. The contrast between the approach taken with these standards and the way the Internet works offers a pragmatic framework for a deeper understanding of engineering, economics and more. For those who are not technical, 5G sounds like the successor to 4G, the current, fourth-generation cellular phone system. And indeed, that is the way it is marketed. Similarly, ATSC 3.0 is presented as the next stage of television. One hint that something is wrong in 5G-land came when I was told that 5G is necessary for IoT. This is a strange claim considering how much we are already doing with connected (IoT, or Internet of Things) devices. I'm reminded of past efforts such as IMS (IP Multimedia Subsystem) from the early 2000s, which was deemed necessary to support multimedia on the Internet even though voice and video were already working fine. Perhaps the IMS advocates had trouble believing multimedia was doing just fine because the Internet doesn't provide the performance guarantees once deemed necessary for speech. Voice over IP (VoIP) works as a byproduct of the capacity created for the web. The innovators of VoIP took advantage of that opportunity rather than depending on guarantees from network engineers. 5G advocates claim that very fast response times (on the order of a few milliseconds) are necessary for autonomous vehicles. Yet the very term "autonomous" should hint that something is wrong with that notion. I was at the Ford booth, for example, looking at their effort, and confirmed that the computing is all local.
After all, an autonomous vehicle has to operate even when there is no high-performance connection, or any connection at all. If the car can function without connectivity, then 5G isn't a requirement but rather an optional enhancement — and opportunistic enhancement is something today's Internet already does very well. The problem is not with any particular technical detail but rather the conflict between the tradition of network providers trying to predetermine requirements and the idea of creating opportunity for what we can't anticipate. This conflict isn't obvious because there is a tendency to presuppose that services like voice only work because they are built into the network. It is harder to accept the idea that VoIP works well because it is not built into the network and thus not limited by the network operators. This is why we can casually do video over the Internet — something that was never economical over the traditional phone network. It is even more confusing because we can add these capabilities at no cost beyond the generic connectivity, using software anyone can write without having to make deals with providers. The idea that voice works because, and not despite the fact that, the network operators are not helping is counter-intuitive. It also creates a need to rethink business models that presume the legacy model's simple chain of value creation. At the very least we should learn from biology and design systems to have local "intelligence". I put the word intelligence in quotes because this intelligence is not necessarily cognitive but more akin to structures that have co-evolved. Our eyes are a great example — they preprocess our visual information and send hints, like detected lines, rather than acting like cameras sending raw video streams to a central processing system. Local processing is also necessary so systems can act locally. That's just good engineering.
So is the ability of the brain to work with the eye to resolve ambiguity, as when we take a second look at something that didn't make sense at first glance. The ATSC 3.0 session at ICCE (IEEE Consumer Electronics [...]
2017-02-25T10:13:00-08:00
I've followed Cuba's home-connectivity "plan" from the time it was leaked in 2015 until the recent Havana home-Internet trial. I thought the plan was a bad idea when it was leaked — it calls for installation of obsolete DSL (digital subscriber line) technology — and now that the Havana trial is complete, I question whether the plan was ever real. ETECSA denied the validity of the leaked presentation at the time, and their definition of "broadband" was "at least 256 kb/s." Furthermore, the goal was stated as "Alcanzar para el 2020 que no menos del 50% de los hogares disponga de acceso de Banda Ancha a Internet" (roughly, "Achieve by 2020 that no less than 50% of homes have broadband Internet access"). My Spanish is not very good, so I am not sure whether the plan was for connectivity in 50% of homes or connectivity being available to 50% of homes. Either way, slow DSL will be a joke in 2020. But the free home-connectivity trial in Havana used the DSL technology described in the leaked plan — might it be for real? I don't think so. At the end of the free trial, a friend told me that around 700 of the 2,000 eligible Havana homes agreed to pay to continue the service. He also said that around 12 homes had been connected in Bayamo and that the same was going to happen in Santa Clara and Las Tunas. If this home-connectivity roll-out has been planned since 2015, why is it going so slowly? Why aren't other parts of Havana open? Why aren't they doing large-scale trials in Bayamo, Santa Clara, and Las Tunas? The quality of a DSL connection is a function of the length and condition of the telephone wire running between a home and the central office serving it. If they had really planned to bring DSL to many Cuban homes, they would have understood the necessity of investing heavily in wiring as well as central office equipment.
My guess is that the Havana trial and the installations in Bayamo, Santa Clara and Las Tunas are not part of a national home-connectivity plan but ends in themselves — interim measures aimed at bringing slow DSL connectivity to small businesses and self-employed people in the most affluent parts of selected cities. That makes more sense to me than a plan to spend a lot of money upgrading copper telephone wires and central office equipment in order to offer obsolete connectivity to 50% of Cuban homes by 2020. (I've always hoped Cuba would leapfrog today's technology, opting for that of the next generation.) If the DSL "plan" was never a plan, what might we expect? (The following is highly speculative.) My hope is that Cuba regards efforts like home DSL, WiFi hotspots, Street Nets and El Paquete as temporary stopgap measures while waiting for next-generation technology. If that is the case, we might see progress when Raúl Castro steps down next year. Miguel Díaz-Canel Bermúdez, who is expected by many to succeed Castro, acknowledged the inevitability of the Internet in a 2013 talk, saying "today, news from all sides, good and bad, manipulated and true, or half-true, circulates on networks, reaches people — people hear it. The worst thing, then, is silence." (I think Donald Trump may have been in the audience :-). In a later speech, Díaz-Canel recognized that the Internet is a social and economic necessity and that the government therefore has the responsibility of providing affordable connectivity to every citizen, but there is a caveat — the government must be vigilant in assuring that citizens use the Internet legally. Here is a clip from that speech. In 1997, the Cuban government decided that the political risk posed by the Internet outweighed its potential benefit and decided to suppress it. At the same time, China opted for a ubiquitous, modern Internet, understanding it could be used as a tool for propaganda and surveillance.
It sounds to me like Díaz-Canel has endorsed the Chinese model and will push for next-generation technology with propaganda and surveillance. (Again, my Spanish is not so great, and I may have mischaracterized Díaz-Can[...]
Ericsson, Nokia get go-ahead for LTE-U base stations despite early fears they might interfere with Wi-Fi – Jon Gold reporting in Network World: "The Federal Communications Commission today approved two cellular base stations — one each from Ericsson and Nokia — to use LTE-U, marking the first official government thumbs-up for the controversial technology. ... T-Mobile has already announced that it will be deploying LTE-U technology… Other major tech sector players, including Google, Comcast, and Microsoft, have expressed serious concerns that LTE-U doesn't play as nicely with Wi-Fi as advertised."
Follow CircleID on Twitter
In a joint announcement today, Dutch research institute CWI and Google revealed that they have broken the SHA-1 internet security standard "in practice". Cryptographic hash functions such as SHA-1 are used for digital signatures and file-integrity verification, and protect a wide spectrum of digital assets, including credit card transactions, electronic documents, open-source software repositories and software updates.
— "Today, more than 20 years after SHA-1 was first introduced, we are announcing the first practical technique for generating a collision," said the Google team in a blog post today. "This represents the culmination of two years of research that sprung from a collaboration between the CWI Institute in Amsterdam and Google. ... For the tech community, our findings emphasize the necessity of sunsetting SHA-1 usage. Google has advocated the deprecation of SHA-1 for many years, particularly when it comes to signing TLS certificates. ... We hope our practical attack on SHA-1 will cement that the protocol should no longer be considered secure."
— What types of systems are affected? "Any application that relies on SHA-1 for digital signatures, file integrity, or file identification is potentially vulnerable. These include digital certificate signatures, email PGP/GPG signatures, software vendor signatures, software updates, ISO checksums, backup systems, deduplication systems, and GIT." https://shattered.io/
— "This is not a surprise. We've all expected this for over a decade, watching computing power increase. This is why NIST standardized SHA-3 in 2012." Bruce Schneier / Feb 23
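In code terms, the affected systems listed above all do some variant of the same thing: derive a short digest from a file or message and treat digest equality as content equality. The sketch below uses Python's standard `hashlib` to contrast SHA-1's 160-bit digests with SHA-256's 256-bit ones (it does not reproduce the collision itself, which required enormous computation); a collision means two different inputs whose SHA-1 digests are identical, which is exactly why digest equality stops being a safe proxy for content equality.

```python
import hashlib

def digests(data: bytes) -> dict:
    """Return hex digests; SHA-1 is 160 bits (40 hex chars), SHA-256 is 256 bits (64)."""
    return {
        "sha1": hashlib.sha1(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }

# Classic FIPS 180 test vector for the input b"abc":
d = digests(b"abc")
# d["sha1"]   == "a9993e364706816aba3e25717850c26c9cd0d89d"
# d["sha256"] == "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
```

For most applications, migrating off SHA-1 is largely a matter of swapping `hashlib.sha1` for `hashlib.sha256` wherever digests are computed, then regenerating any stored digests.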
2017-02-23T15:26:01-08:00
As I've written before, domain name disputes involving multiple trademarks sometimes raise interesting issues, including whether a panel can order a domain name transferred to one entity without the consent of the other. While panels typically have found ways to resolve this issue, one particularly troubling fact pattern arises when a panel denies a complaint simply because a disputed domain name contains trademarks owned by two different entities. The situation presents itself when a panel considers whether a domain name containing two trademarks is "identical or confusingly similar" to a single trademark — that is, the trademark owned by the complainant — as required by the first factor of the Uniform Domain Name Dispute Resolution Policy (UDRP). In one odd case, a UDRP panel confronted the issue when a complaint was filed by the owner of the trademark NSK, but the disputed domain name also contained the trademark SKF — "which is a third-party brand of bearing products which competes with Complainant." Therefore, the panel was faced with the question of whether the domain name
The Republican-controlled FCC on Thursday suspended the net neutrality transparency requirements for broadband providers with fewer than 250,000 subscribers. Grant Gross from IDG News Service reports: "The transparency rule [official FCC release], waived for five years in a 2-1 party-line vote Thursday, requires broadband providers to explain to customers their pricing models and fees as well as their network management practices and the impact on broadband service. The commission had previously exempted ISPs with fewer than 100,000 subscribers, but Thursday's decision expands the number of ISPs not required to inform customers. Only about 20 U.S. ISPs have more than 250,000 subscribers. The five-year waiver may be moot, however."
2017-02-23T11:35:00-08:00
The non-contracted parties of the ICANN community met in Reykjavík last week for their annual intersessional meeting, where calls for more transparency, operational consistency, and procedural fairness in how ICANN ensures contractual compliance topped the agenda. ICANN, as a quasi-private cooperative, derives its legitimacy from its ability to enforce its contracts with domain name registries and registrars. If it failed to implement the policies set by the community and to enforce its agreements with the contracted parties, the very legitimacy and credibility of the multistakeholder governance model would be threatened, and ICANN's ability to ensure the stability and security of the Domain Name System could be questioned. The Commercial and Non-Commercial Stakeholder Groups are not unified in their views on how ICANN should manage contractual compliance, but both largely agree that ICANN should be more open with the community regarding its internal operating procedures and the decisions that are made. Some members of the Commercial Stakeholder Group desire an Internet policeperson, envisioning ICANN's compliance department as taking an active role in content control, disabling access to an entire website on the mere accusation of copyright infringement. ICANN has previously said it is not a global regulator of Internet content, but there is a sentiment in some circles that, through shadow regulation, well-resourced and politically-connected companies should be able to determine which domain names can resolve and which cannot. The Non-Commercial Stakeholder Group believes that the Domain Name System works because Internet users trust it to direct them to their intended destination. Likewise, if a registrant registers a domain name in good faith, they should expect to be able to use this Internet resource to disseminate the legal speech and expression of their choice.
Domain names enable access to knowledge and opinions that sometimes challenge the status quo, and ultimately they enable the fundamental human right to dissent and to communicate. If a website is hosting illegal content, it is the courts that have the authority to make such a determination and to impose appropriate remedies — not private enterprises that have struck deals with registries, and certainly not ICANN. The problem is that there is mission creep, and ICANN is indirectly regulating content by repossessing domain names from registrants, sometimes without any investigation of fact. During the intersessional, the Non-Commercial Stakeholders Group asked the compliance department to outline how complaints can be filed and how they are reviewed, and to describe how the interests of registrants are represented during the investigation of complaints. The answers were very revealing: anyone can file a complaint with ICANN, even anonymously; there are no public procedures on the complaint process; and registrants can neither know that a complaint has been filed against them, nor feed into the decision-making process, nor challenge the decision. This is problematic, not least because ICANN staff admitted last November in Hyderabad that there has been abuse of the compliance department's complaints form, with some entities having made bad-faith attempts to have domain names taken down. This is not a theoretical issue. In 2015, ICANN's compliance department caused financial harm to a domain name registrant because of a minor, perceived inaccuracy in their domain name's WHOIS records. In this instance, the registrant had a mailing address in Virginia and a phone number with a Tennessee area code. While both details were valid, and the registrant was contactable, a "violent criminal" filed a complaint with ICANN alleging that the details were inaccurate. The complaint was accepted by ICANN and pas[...]
Robert Cannon writes: Over the past year, the National Telecommunications and Information Administration in the Department of Commerce has convened a series of meetings and sought feedback on the policy implications of the Internet of Things. In January, prior to the administration transition, NTIA released a draft working paper Fostering the Advancement of the Internet of Things (also reported here on CircleID). It is unclear how agency work released in January might survive the transition. However, indicating that NTIA's IoT paper is still viable, NTIA under the new administration released a notice extending the comment period on the draft. Comments will now be accepted until March 13, 2017.
2017-02-23T05:12:00-08:00
In recent weeks, you may have seen several articles asking that "ICANN", the Internet Corporation for Assigned Names and Numbers, move more expeditiously to open up the next application window for new gTLDs. As one commenter wrote, "Ask a Board member or ICANN staff when they expect the next application window to open, and they will inevitably suggest 2020 — another three years away. Any reasonable person would agree that eight years for a second application window is anything but expeditious, and some might say potentially anti-competitive." Rather than pointing the finger, maybe it's time to turn the question on its head and ask, "What can we do to help move things forward?" As one of the co-chairs of the ICANN Policy Development Process (PDP) on Subsequent Procedures for the introduction of new gTLDs, I certainly understand the requests to move more quickly. That said, we need to stop asking others, like the ICANN Board, to act in a top-down fashion to start a new process when we are not actively participating in the bottom-up, multi-stakeholder process that will enable that new application window. We, the community, actually control our own destiny in this regard. Yes, it has been a number of years since the last round closed. But we, as a community, have all known the milestones that needed to be achieved before the ICANN Board could approve the next application window: completion of the Competition, Consumer Choice and Trust Review (CCT-RT), the ICANN staff implementation review, and the Policy Development Process on Subsequent Procedures. To date, I would argue that ICANN staff are the only ones who have completed their deliverable, the implementation review. The CCT-RT is several months behind schedule, and the PDP on Subsequent Procedures is making good progress.
However, like many PDPs, there is certainly a lack of active participation from those who would like to see the process move more quickly. So rather than complaining to the ICANN Board about the speed of the process, please join the PDP on Subsequent Procedures and participate actively. Submit proposals rather than just complaining about things you didn't like. Respond to questions and surveys when they are released. NOTE: A Community Comment period will open shortly with a number of questions on improvements that can be made. This is exactly the kind of opportunity that, with plenty of community engagement, could help move things forward, so please respond in a timely manner. In short, please help us help you. If you want things to move more quickly, get involved.

Written by Jeff Neuman, Senior Vice President, Valideus USA
2017-02-23T05:10:00-08:00
Nokia has developed a framework intended to help governments implement smart cities. The framework is designed to help regions plan and procure services for smart city concepts; however, Nokia argues that more emphasis needs to be put on developing an overarching strategy rather than on small projects. The Australian government has announced that it is interested in building smart cities, but there are still major gaps in figuring out how to do so. Nokia Oceania CTO Warren Lemmens said, in an interview with ZDNet, that cities are currently not equipped for the digital future and are being left to solve the problem by themselves. To address the issue, Nokia is suggesting an approach at the federal level, where states and territories work in conjunction with an overarching federal government program, allowing cities to focus on their specific needs.

The Concept of a Smart City

What Nokia has in mind is quite ambitious: its six-point framework takes a horizontal approach, meaning Nokia is developing a horizontal Internet of Things (IoT) platform used to connect every device together. The platform, called IMPACT (Intelligent Management Platform for All Connected Things), will manage every feature of machine-to-machine (M2M) connections for any protocol, any device, and across any platform. Nokia's framework would institute one single City Digital Platform for all cities. This platform would help devise a new federal program for innovation that focuses on data, the beginning of a more collaborative approach between government, businesses, academia and startups that would be the cornerstone of smart cities. This would assist public-private partnerships for the improvement of smart cities, eliminating the current tendency to separate device, data and application environments — and ensuring the personalization of each city under the program.
What Nokia is trying to do is bring everyone of importance together to work on turning the smart city concept into a reality, sooner rather than later.

Smart Cities in the Future

Many of you are probably wondering what a smart city will look like in the future. Well, if you're thinking about flying cars and teleportation fields, you're going to be disappointed. A smart city will be a hub of information, IoT devices and all kinds of algorithms and sensors that make the city more livable. This won't involve tearing down old buildings and building them again from scratch. Instead, it will focus on improved urban planning: vertical gardens, new buildings with embedded smart technology and various other gadgets. The point is that a smart city will utilize the digital economy, and IoT will be the main distributor of data and information. According to Nokia, the discussion about smart cities should focus on data. Silos need to be broken down in order to leverage data, so it can be collected and shared between governments and business. This will improve personalization because we'll find out how businesses and citizens use the city. Nokia also said that every city needs a "control center" to collect and utilize this data to drive this personalization. Lemmens said that an operations environment — which consists of three separate layers: application, service, and infrastructure operations, with security "straddling all the operation layers" — should be used in conjunction with this control center.

What is Next?

Basically, the smart city will focus on personalization through IoT devices. This is quite exciting because of the possibilities that come with this concept. It is quite possible that cities will prosper based on personalization, digital economy and startup business wil[...]
2017-02-22T21:30:00-08:00
I feel incredibly lucky to work every day with some of the biggest, most recognised and most innovative organisations from around the world on developing strategies for their .brand TLDs. In this capacity, I also have the privilege of meeting some of the most knowledgeable and forward-thinking experts in branding, digital marketing, web development and technology, to name a few. One such person is the brilliantly talented Matt Dorville, Content and SEO Strategist for Major League Baseball Advanced Media. Matt is widely regarded as a leading global SEO expert and develops SEO strategy for MLB.com, NHL.com and all 61 clubs within those leagues, as well as advising on SEO for Major League Baseball Advanced Media partners both domestically and internationally. Matt has some fantastic insights relating in particular to recent changes in Google's treatment of vanity URLs, and to how using vanity domains within your .brand can supercharge your existing site's SEO ranking.

Vanity URLs: no longer a risk

Matt explains that many SEO managers have shifted their view on vanity URLs as an SEO strategy due to changes in Google's search algorithm. "In the past, while vanity URLs were frequently seen as a viable strategy for generating quality links and building one's website, there was often conflict with this decision. Many SEO managers tended to request that their sites shy away from vanity URLs, as each vanity URL redirect resulted in a loss of around 15 percent of the link strength," writes Matt. "However, recent changes to Google combined with the continued global emergence of .brand TLD usage opens up a new strategy that shows great potential. "On July 26th, 2016, Google Webmaster Trends Analyst Gary Illyes announced that 30x redirects no longer lose PageRank, which was a significant shift in the underlying algorithm.
You can hypothesize that Google did this for a great many reasons, no doubt their ongoing campaign to encourage websites to migrate to https being a large factor in this."

It's all about amplification

Matt explains that since Google changed its algorithm to no longer penalise 30x redirects in search, vanity URLs redirecting to deeper content within a site can provide a fantastic opportunity for further amplification of links, and therefore an increase in SEO strength. Using vanity URLs with your .brand allows:

- the marketing team to get the URL with the product name they want;
- the development team to avoid a lot of work getting consumers there; and
- SEO to gain strength both on the landing page for the campaign and through link building that strengthens the entire site.

"The recent change in 301 redirection is significant news, and a vanity URL within a .brand domain should provide excellent benefits in broadcasting on social as well as generating links to the domain through amplification. The simplicity of the vanity domain, most times pairing up the product, action, or campaign with .brand, should be able to tie in with marketing to increase SEO strength on both the landing page and the entire site, and generate traffic through organic and social channels."

Why is this relevant for .brands?

According to Matt, .brand TLDs have even greater potential to capitalise on vanity URL strategies, for five main reasons:

- Direct navigation for customers, with simpler URLs that take visitors directly to deep content within a website
- Flexibility to change where a vanity URL directs as your business or the market changes
- Global benefits, with the ability to geo-locate visitors and send them to the most relevant content
- No more availability issues, as you own the entire namespace and won't spend a fortune acquiring domains
- Focus on products and campaigns with stronger c[...]
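Mechanically, the vanity-URL pattern Matt describes is just an HTTP 301 (permanent) redirect from a short .brand URL to deep content on the main site. The sketch below shows the pattern with Python's standard `http.server`; the vanity path and the `example-brand.com` target are hypothetical, invented for illustration rather than taken from MLB's actual configuration.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mapping from short vanity paths on a .brand domain
# to deep content pages on the main site.
VANITY_REDIRECTS = {
    "/worldseries": "https://www.example-brand.com/2017/world-series-landing",
}

class VanityRedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = VANITY_REDIRECTS.get(self.path)
        if target:
            # 301 = permanent redirect; per Google's July 2016 change,
            # it no longer costs PageRank.
            self.send_response(301)
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet
```

Pointing the vanity domain's DNS at a host running this handler completes the loop: visitors (and crawlers) who hit the short URL are sent straight to the deep content, and the link equity follows.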
The State Bank of India (SBI) has announced it will be switching its domain name from "sbi.co.in" to the branded "bank.sbi", according to various news sources. SBI is the first banking organization in India to move its online presence to a new gTLD. With the switch to the branded TLD, SBI has said it aims to simplify the digital experience of customers and bring in enhanced security against phishing and lookalike websites. "SBI being the largest bank has always been the pioneer in adapting new technology. SBI has always believed in providing high-tech yet secure internet experience to its customers. Bank's own gTLD is another step in this direction," SBI's Chairman Arundhati Bhattacharya said in a statement.
Follow CircleID on Twitter
"Three years after hackers used a spearphishing attack to successfully gain access to internal data at the Internet Corporation for Assigned Names and Numbers (ICANN), the data is still being passed around and sold on black markets for $300, complete with claims that it’s never been leaked before," reports Patrick O'Neill in CyberScoop. "The 2014 breach allowed hackers to take ICANN’s internal emails and wiki, its administrative data files, its blog and the Whois portal. ICANN, which has been the target of many cyberattacks over the years, possesses much more critical information due to its day-to-day management of top-level domains ... The fact that nothing else slipped out is a testament to good security. But even a little data from such an important organization has black-market value for years."
Michael "Mick" Moran, assistant director of INTERPOL's Vulnerable Communities Unit, was honored at the 39th general meeting of the Messaging, Malware and Mobile Anti-Abuse Working Group for his personal commitment to this challenging work and for fostering international cooperation to fight online exploitation. Moran, who has helped rescue thousands of child abuse material victims since he started working in the field in 1997, challenged the internet industry to do more to protect innocent children as he received the 2017 M3AAWG Mary Litynski Award.
The M3AAWG Mary Litynski Award recognizes the life-time achievements of a person whose work has significantly contributed to the safety of the online community. In his acceptance presentation and in a video for the M3AAWG YouTube channel, Moran outlined some of the changing strategies in battling child abuse materials and offered suggestions on how the industry can better safeguard its networks.
Distributed Denial-of-Service (DDoS) attacks will become larger in scale, harder to mitigate and more frequent, says Deloitte in its annual Global Predictions 2017 report. It predicts "there will be on average a Tbit/s (terabit per second) attack per month, over 10 million attacks in total, and an average attack size of between 1.25 and 1.5 Gbit/s (gigabit per second) of junk data being sent. An unmitigated Gbit/s attack (one whose impact was not contained) would be sufficient to take many organizations offline."
— Anticipated escalation in DDoS threat is based on three concurrent trends: the growing installed base of insecure Internet of Things (IoT) devices; the online availability of malware methodologies, such as Mirai, which allow relatively unskilled attackers to corral insecure IoT devices and use them to launch attacks; and the availability of ever higher bandwidth speeds.
— Entities that should remain particularly alert, according to the report, include: retailers with a high share of online revenues; online video games companies; video streaming services; online business and service delivery companies (financial services, professional services); and government online services (for example, tax collection).
The report also shares a range of options that companies and governments should consider to mitigate the impacts of DDoS attacks. These include decentralizing, bandwidth oversubscription, testing, and dynamic defense, among others. (Full report available here)
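The report does not spell these mitigations out in code, but the flavor of "dynamic defense" can be illustrated with one classic building block: a token-bucket rate limiter that sheds traffic from a source once it exceeds an allowed rate. A minimal sketch, with hypothetical rate and burst parameters:

```python
import time

class TokenBucket:
    """Allow bursts of up to `capacity` requests, refilled at `rate`
    tokens per second; requests arriving with no token available are
    rejected. One bucket would be kept per traffic source."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A front end would keep one bucket per source address and drop or throttle requests when allow() returns False. Note that this only helps against floods that reach the defender with spare capacity; once the access link itself is saturated, filtering must happen upstream, which is where the report's bandwidth oversubscription and decentralization options come in.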
2017-02-20T16:39:00-08:00
The other day several of us were gathered in a conference room on the 17th floor of the LinkedIn building in San Francisco, looking out of the windows as we discussed various technical matters. All around us there were new buildings under construction, each with a tall tower crane anchored to the building in several places. We wondered how those cranes were built, and compared the precision of that building process with the complete mess that building a network seems to be. And then, this week, I ran across a couple of articles (Feb 14 & Feb 15) arguing that we need a new Internet. For instance, from the Feb 14 post:

What we really have today is a Prototype Internet. It has shown us what is possible when we have a cheap and ubiquitous digital infrastructure. Everyone who uses it has had joyous moments when they have spoken to family far away, found a hot new lover, discovered their perfect house, or booked a wonderful holiday somewhere exotic. For this, we should be grateful and have no regrets. Yet we have not only learned about the possibilities, but also about the problems. The Prototype Internet is not fit for purpose for the safety-critical and socially sensitive types of uses we foresee in the future. It simply wasn't designed with healthcare, transport or energy grids in mind, to the extent it was 'designed' at all. Every "circle of death" watching a video, or DDoS attack that takes a major website offline, is a reminder of this. What we have is an endless series of patches with ever growing unmanaged complexity, and this is not a stable foundation for the future.

So the Internet is broken. Completely. We need a new one. Really? First, I'd like to point out that much of what people complain about in terms of the Internet, such as the lack of security or the lack of privacy, is actually a matter of tradeoffs.
You could choose a different set of tradeoffs, of course, but then you would get a different "Internet" — one that may not, in fact, support what we support today. Whether the things it would support would be better or worse, I cannot answer, but the entire concept of a "new Internet" that supports everything we want it to support in a way that has none of the flaws of the current one, and no new flaws we have not thought about before — this is simply impossible. So let's leave that idea aside, and think about some of the other complaints. The Internet is not secure. Well, of course not. But that does not mean it needs to be this way. The reality is that security is a hot potato that application developers, network operators, and end users like to throw at one another, rather than something anyone tries to fix. Rather than considering each piece of the security puzzle, and thinking about how and where it might be best solved, application developers just build applications without security at all, and say "let the network fix it." At the same time, network engineers say either: "sure, I can give you perfect security, let me just install this firewall," or "I don't have anything to do with security, fix that in the application." On the other end, users choose really horrible passwords, and blame the network for losing their credit card number, or say "just let me use my thumbprint," without ever wondering where they are going to go to get a new one when their thumbprint has been compromised. Is this "fixable"? Sure, for some strong measure of security — but a "new Internet" isn't going to fare any better than the current one unless people start talking to one another. The Internet cannot sc[...]
2017-02-20T13:24:00-08:00
Co-authored by Leslie Daigle, Konstantinos Komaitis, and Phil Roberts. The incredible pace of change of the Internet — from research laboratory inception to global telecommunication necessity — is due to the continuing pursuit, development and deployment of technology and practices adopted to make the Internet better. This has required continuous attention to a wide variety of problems ranging from "simple" to so-called "wicked problems". Problems in the latter category have been addressed through collaboration. This post outlines key characteristics of successful collaboration activities (download PDF version).

Problem difficulty and solution approaches

Wikipedia offers a definition of "wicked problems" [accessed September 16, 2016]: "A wicked problem is a problem that is difficult or impossible to solve because of incomplete, contradictory, and changing requirements that are often difficult to recognize. The use of the term 'wicked' here has come to denote resistance to resolution, rather than evil [...] Moreover, because of complex interdependencies, the effort to solve one aspect of a wicked problem may reveal or create other problems." Of course, not all large problems are wicked. As noted in the Internet Society's commentary on Collaborative Stewardship, sometimes an Internet problem has a known answer and the challenge is to foster awareness and uptake of that known solution. Denning and Dunham characterize innovation challenges as simple, complex, or wicked [see Denning, Peter J. and Robert Dunham, "The Innovator's Way: Essential Practices for Successful Innovation", page 315].
In the Internet context, the characteristics and approaches to addressing them can be summarized as follows:

- Simple. Characteristics: solutions, or design approaches for solutions, are known. Solution path: Cooperation, i.e. awareness-raising and information sharing, typically through Network Operator Groups.
- Complex. Characteristics: no known solution exists; the problem spans multiple parts of the Internet. Solution path: Consensus, i.e. open, consensus-based standards development.
- Wicked. Characteristics: no solution exists in any domain; there is a general lack of agreement on the existence or characterization of the problem. Solution path: Collaboration, i.e. moving beyond existing domain and organization boundaries and set processes for determining problems and solutions.

Why Internet problems are often wicked

First, it is important to understand that, today, the Internet is largely composed of private networks. Individual participants, corporations or otherwise, must have a valid business reason for the adoption of a certain technology or practice in their own network. This does not necessarily rise to the level of a quantifiable business case, but they have to have some valid reason that it helps them make something better in their own networks or experience of the Internet. However, if the practice is a behavior on the network that is impacted by, or includes, other networks, the participants must have a standard they agree to. This might be a protocol standard governing bits on the wire and the exchange of communication, or a common practice. To get to that level of agreement, participants — whether private companies with financial stakes in the situation, or governments, or individuals — must be disposed and willing to collaborate with others to instantiate the adoption.

Addressing wicked Internet problems: Keys to successful collaboration

We identify here four important characteristics of collaborative activities that have driven the success and innovation of the In[...]