Subscribe: CircleID
http://www.circleid.com/rss/rss_all/
Language: English



CircleID



Latest posts on CircleID



Updated: 2018-02-16T13:10:00-08:00

 



SpaceX Launching Two Experimental Internet Satellites This Weekend

2018-02-16T13:10:00-08:00

On Saturday, SpaceX will be launching two experimental mini-satellites that will pave the way for the first batch of what is planned to be a 4,000-satellite constellation providing low-cost internet around the Earth. George Dvorsky, reporting in Gizmodo: "Announced back in 2015, Starlink is designed to be a massive, space-based telecommunications network consisting of thousands of interlinked satellites and several geographically dispersed ground stations. ... The plan is to have a global internet service in place by the mid-2020s, and get a leg-up on potential competitors. ... Two prototypes, named Microsat 2a and 2b, are now packed and ready for launch atop a Falcon-9 v1.2 rocket."

Follow CircleID on Twitter

More under: Access Providers, Broadband, Wireless




A Brooklyn Bitcoin Mining Operation is Causing Interference to T-Mobile's Broadband Network

2018-02-16T10:53:00-08:00

(Image: AntMiner S5 Bitcoin Miner by Bitmain, released in 2014. The S5 has since been surpassed by newer models.)

The Federal Communications Commission on Thursday sent a letter to an individual in Brooklyn, New York, alleging that a device in the individual's residence used to mine Bitcoin is generating spurious radiofrequency emissions, causing interference to a portion of T-Mobile's mobile telephone and broadband network. The letter states the FCC received a complaint from T-Mobile concerning interference to its 700 MHz LTE network in Brooklyn, New York. In response to the complaint, agents from the Enforcement Bureau's New York Office used direction-finding techniques to confirm that radio emissions in the 700 MHz band were, in fact, emanating from the user's residence in Brooklyn. "When the interfering device was turned off the interference ceased. ... The device was generating spurious emissions on frequencies assigned to T-Mobile's broadband network and causing harmful interference." The FCC's warning letter further states that the user's "Antminer s5 Bitcoin Miner" operation constitutes a violation of Federal law and could subject the operator to severe penalties, including substantial monetary fines and arrest.

Jessica Rosenworcel, FCC Commissioner, in a tweet said: "Okay, this @FCC letter has it all: #bitcoin mining, computing power needed for #blockchain computation and #wireless #broadband interference. It all seems so very 2018."

Follow CircleID on Twitter

More under: Access Providers, Blockchain, Broadband, Telecom, Wireless




Hackers Earned Over $100K in 20 Days Through Hack the Air Force 2.0

2018-02-16T07:47:01-08:00

(Image: The participating U.S. Airmen and hackers at the conclusion of h1-212 in New York City on Dec 9, 2017)

HackerOne has announced the results of Hack the Air Force 2.0, which invited trusted hackers from all over the world to take part in the Air Force's second bug bounty challenge in less than a year. The 20-day bug bounty challenge was the most inclusive government program to date, with 26 countries invited to participate. From the report: "Hack the Air Force 2.0 is part of the Department of Defense's (DoD) Hack the Pentagon crowd-sourced security initiative. Twenty-seven trusted hackers successfully participated in the Hack the Air Force bug bounty challenge — reporting 106 valid vulnerabilities and earning $103,883. Hackers from the U.S., Canada, United Kingdom, Sweden, Netherlands, Belgium, and Latvia participated in the challenge. The Air Force awarded hackers the highest single bounty award of any Federal program to-date, $12,500."

Follow CircleID on Twitter

More under: Cybersecurity




WHOIS Inaccuracy Could Mean Noncompliance with GDPR

2018-02-15T12:41:00-08:00

The European Commission recently released technical input on ICANN's proposed GDPR-compliant WHOIS models that underscores the GDPR's "Accuracy" principle — making clear that reasonable steps should be taken to ensure the accuracy of any personal data obtained for WHOIS databases and that ICANN should be sure to incorporate this requirement in whatever model it adopts. Contracted parties concerned with GDPR compliance should take note. According to Article 5 of the regulation, personal data shall be "accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay." This standard is critical for maintaining properly functioning WHOIS databases and would be a significant improvement over today's insufficient standard of WHOIS accuracy. Indeed, European Union-based country code TLDs require rigorous validation and verification, much more in line with GDPR requirements — a standard to strive for.

The stage is set for an upgrade to WHOIS accuracy: ICANN's current approach to WHOIS accuracy simply does not comply with GDPR. Any model selected by ICANN to comply with GDPR must be accompanied by new processes to validate and verify the contact information contained in the WHOIS database. Unfortunately, the current Registrar Accreditation Agreement, which includes detailed provisions requiring registrars to validate and verify registrant data, does not go far enough to meet these requirements. At a minimum, ICANN should expedite the implementation of cross-field validation, which is required by the 2013 RAA but to date has not been enforced. These activities should be supplemented by examining other forms of validation, building on ICANN's experience in developing the WHOIS Accuracy Reporting System (ARS), which examines the accuracy of contact information from the perspective of syntactical and operational validity.

Also, validation and accuracy of WHOIS data has been a long-discussed matter within the ICANN community — with the 2014 Final Report from the Expert Working Group on gTLD Directory Services: A Next-Generation Registration Directory Service (RDS) devoting an entire chapter to "Improving Data Quality," with a recommendation for more robust validation of registrant data. And, not insignificantly, ICANN already has investigated and deployed validation systems in its operations, including those in use by its Compliance department to investigate accuracy complaints.

Despite its significance to the protection and usefulness of WHOIS data, the accuracy principle is surprisingly absent from the three WHOIS models presented by ICANN for discussion among relevant stakeholders. Regardless of which model is ultimately selected, the accuracy principle must be applied to any WHOIS data processing activity in a manner that addresses GDPR compliance — both at inception, when a domain is registered, and later, when data is out of date. All stakeholders can agree that WHOIS data is a valuable resource for industry, public services, researchers, and individual Internet users. The GDPR "Accuracy" principle aside, taking steps to protect the confidentiality of this resource would be meaningless if the data itself were not accurate or complete.

Written by Fabricio Vayra, Partner at Perkins Coie LLP

Follow CircleID on Twitter

More under: Domain Names, ICANN, Privacy, Whois
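To make the idea of syntactic and operational validation a little more concrete, here is a minimal, hypothetical sketch — not ICANN's, any registrar's, or the ARS's actual implementation — of the kinds of checks discussed above: a syntactic format check, an operational check that the contact email's domain can actually receive mail, and a simple cross-field consistency check between country and phone prefix. The field names, rules, and prefix table are illustrative assumptions only.

```python
# Hypothetical sketch of WHOIS contact validation (illustrative only).
# Syntactic check: field formats. Operational check: the email domain has MX/A records.
# Cross-field check: phone country prefix matches the declared country.
import re
import dns.resolver  # dnspython

# Illustrative mapping; a real system would use a full ITU E.164 table.
COUNTRY_PHONE_PREFIX = {"US": "+1", "DE": "+49", "FR": "+33"}

EMAIL_RE = re.compile(r"^[^@\s]+@([^@\s]+\.[^@\s]+)$")

def validate_contact(contact: dict) -> list:
    """Return a list of problems found in a registrant contact record."""
    problems = []

    # Syntactic validation
    m = EMAIL_RE.match(contact.get("email", ""))
    if not m:
        problems.append("email: not syntactically valid")
    if not re.match(r"^\+\d{6,15}$", contact.get("phone", "")):
        problems.append("phone: expected an E.164-style number, e.g. +14155550123")

    # Operational validation: can the email domain receive mail at all?
    if m:
        domain = m.group(1)
        try:
            dns.resolver.resolve(domain, "MX")
        except Exception:
            try:
                dns.resolver.resolve(domain, "A")
            except Exception:
                problems.append(f"email: domain {domain} has no MX or A record")

    # Cross-field validation: country vs. phone prefix
    prefix = COUNTRY_PHONE_PREFIX.get(contact.get("country", ""))
    if prefix and not contact.get("phone", "").startswith(prefix):
        problems.append("phone: prefix does not match declared country")

    return problems

if __name__ == "__main__":
    print(validate_contact({
        "email": "admin@example.com",
        "phone": "+14155550123",
        "country": "US",
    }))
```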



Who Will Crack Cloud Application Access SLAs?

2018-02-14T12:14:01-08:00

The chart below ought to be in every basic undergraduate textbook on packet networking and distributed computing. That it is absent says much about our technical maturity level as an industry. But before we look at what it means, let's go back to some basics.

When you deliver a utility service like water or gas, there's a unit for metering its supply. The electricity wattage consumed by a room is the sum of the wattage of the individual appliances. The house consumption is the sum of the rooms, the neighbourhood is the sum of the houses, and so on. Likewise, we can add up the demand for water, using litres. These resource units "add up" in a meaningful way. We can express a service level agreement (SLA) for utility service delivery in that standard unit in an unambiguous way. This allows us to agree both the final end-user delivery, as well as to contract supply at any administrative or management boundaries in the delivery network.

What's really weird about the broadband industry is that we've not yet got a standard metric of supply and demand that "adds up." What's even more peculiar is that people don't even seem to be aware of its absence, or feel the urge to look for one. What's absolutely bizarre is that it's hard to get people interested even when you do finally find a really good one!

Picking the right "unit" is hard because telecoms is different to power and water in a crucial way. With these physical utilities, we want more of something valuable. Broadband is an information utility, where we want less of something unwanted: latency (and in extremis, loss). That is a tricky conceptual about-turn. So we're selling the absence of something, not its presence. It's kind of asking "how much network latency mess-up can we deal with in order to deliver a tolerable level of application QoE screw-up?" Ideally, we'd like zero "mess-up" and "screw-up," but that's not on offer. And no, I don't expect ISPs to begin advertising "a bit less screwed-up than the competition" anytime soon to consumers!

The above chart breaks down the latency into its independent constituent parts. What it says is: For any network (sub-)path, the latency comprises (G)eographic, packet (S)ize, and (V)ariable contention delay — the "vertical" (de)composition. Along the "horizontal" path the "Gs", "Ss", and "Vs" all "add up". (They are probabilities, not simple scalars, but it's still just ordinary algebra.) You can "add up" the complete end-to-end path "mess-up" by breaking each sub-path "mess-up" into G, S and V; then adding the Gs, Ss, and Vs "horizontally"; and then "vertically" recombining their "total mess-up" (again, all using probability functions to reflect we are dealing with randomness).

And that's it! We've now got a mathematics of latency which "adds up", just like wattage or litres. It's not proprietary, nobody holds a patent on it, everyone can use it. Any network equipment or monitoring enterprise with a standard level of competence can implement it as their network resource model. It's all documented in the open.

This may all seem a bit like science arcana, but it has real business implications. Adjust your retirement portfolio accordingly! Because it's really handy to have a collection of network SLAs that "add up" to a working telco service or SaaS application. In order to do that, you need to express them in a unit that "adds up". In theory, big telcos are involved in a "digital transformation" from commodity "pipes" into cloud service integration companies.
With the occasional honourable exception (you know who you are!), there doesn't seem to be much appetite for engaging with fundamental science and engineering. Most major telcos are technological husks that do vendor contract management, spectrum hoarding, and regulatory guerrilla warfare, with a bit of football marketing on the side. In contrast, the giant cloud companies (like Amazon and Google) are thronged with PhDs thinking about[...]
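To show how the Gs, Ss, and Vs "add up" as probability distributions rather than simple scalars, here is a small illustrative sketch. The per-hop numbers are made up purely for the example and this is not any real measurement tooling: the fixed G and S terms shift each hop's delay distribution, and composing independent hops along the path is a convolution.

```python
# Sketch: composing per-hop latency distributions along a path (illustrative numbers).
# Each hop's delay = G (fixed geographic) + S (fixed serialization for a given packet size)
# + V (random contention), modelled here as a discrete probability mass function over ms bins.
import numpy as np

BIN_MS = 1.0  # each array index represents 1 ms of delay

def hop_pmf(g_ms: float, s_ms: float, v_pmf: np.ndarray) -> np.ndarray:
    """Shift the contention PMF by the fixed G+S delay (rounded to whole bins)."""
    shift = int(round((g_ms + s_ms) / BIN_MS))
    return np.concatenate([np.zeros(shift), v_pmf])

# Made-up contention distributions for two hops (probabilities over 0..3 ms of queueing).
hop1 = hop_pmf(g_ms=5, s_ms=1, v_pmf=np.array([0.7, 0.2, 0.08, 0.02]))
hop2 = hop_pmf(g_ms=12, s_ms=2, v_pmf=np.array([0.5, 0.3, 0.15, 0.05]))

# "Adding up" along the path = convolving the independent per-hop distributions.
path = np.convolve(hop1, hop2)

mean_ms = sum(i * p for i, p in enumerate(path)) * BIN_MS
p99_ms = np.searchsorted(np.cumsum(path), 0.99) * BIN_MS
print(f"end-to-end mean ~ {mean_ms:.1f} ms, 99th percentile ~ {p99_ms:.1f} ms")
```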



Donuts Acquires .TRAVEL TLD

2018-02-14T11:14:00-08:00

Donuts Inc. today announced it has acquired the .TRAVEL domain name from registry operator Tralliance Registry Management Company; the .TRAVEL domain becomes Donuts' 239th TLD. From the announcement: "Since its launch in 2005, the .TRAVEL domain has been embraced by the travel industry. Domain names ending in .TRAVEL now identify tens of thousands of travel businesses and organizations on the Internet. The .TRAVEL domain is widely recognized as of the highest quality, and is used by leading travel businesses such as: visitloscabos.travel, adventures.travel, hongkongdisneyland.travel, goldman.travel, AARP.travel and tens of thousands of others."

Follow CircleID on Twitter

More under: Domain Names, Registry Services, New TLDs




GDPR - Territorial Scope and the Need to Avoid Absurd and Inconsistent Results

2018-02-14T09:54:00-08:00

It's not just establishment, it's context! There is an urgent need to clarify the GDPR's territorial scope. Of the many changes the GDPR will usher in this May, the expansion of EU privacy law's territorial scope is one of the most important. The GDPR provides for broad application of its provisions both within the EU and globally. But the fact that the GDPR has a broad territorial scope does not mean that every company, or all data processing activities, are subject to it. Rather, the GDPR puts important limitations on its territorial scope that must be acknowledged and correctly analyzed by those interpreting the regulation for the global business community. Otherwise, it could lead to absurd implementation and bad policy which no one wants.

EU Establishment

In essence: Where registrars are established in the EU, the registrars' use and processing of personal data is subject to the GDPR. That is no surprise to anyone. Where registrars have no establishment in the EU, but offer domain name registration services to data subjects in the EU, the processing of personal data in the context of such an offer will also be subject to the GDPR. Again, no surprise, and logical. However, where a registrar is based outside the EU, without an establishment in the EU, and uses a processor in the EU, such a non-EU based registrar (as a controller) will not become subject to the GDPR merely by virtue of the EU-based processor's establishment in the EU. The GDPR only applies to the controller according to Article 3(1) GDPR where the processor in the EU would be considered the controller's establishment. If the controller uses an external service provider (not a group company), this processor will generally not be considered an establishment of the controller. It would only be caught by the GDPR if the processing is done "in the context" of that establishment. That is the key, and I'll discuss an example of potentially absurd results if this is not interpreted correctly. NB: All obligations directly applicable to the processor under the GDPR will, of course, apply to the EU-based processor.

WHOIS

Consider the example of WHOIS (searchable registries of domain name holders), where there is presently much debate amongst the many and varied actors in the domain name industry over whether public WHOIS databases can remain public under the GDPR. The second part of ICANN's independent assessment of this issue offered an analysis of the GDPR's territorial reach that deserves closer scrutiny. Addressing the territorial limits of the law, the authors state: "Therefore, all processing of personal data is, no matter where it is carried out, within the territorial scope of the GDPR as long as the controller or processor is considered established within the EU; the nationality, citizenship or location of the data subject is irrelevant." In other words, the authors conclude that as long as a controller or processor has an "establishment" in the EU, all processing of personal data it undertakes, regardless of the location or nationality of the data subject and regardless of whether the processing has any nexus to the EU, is subject to the GDPR. This is wrong. The analysis overlooks key language of the GDPR. Under Article 3.1, the law applies not to any processing that is done by a company that happens to have an establishment in the EU, but to processing done "in the context of" that establishment. This distinction makes a difference. Imagine, for example, a Canadian company that has an office in Paris.
Under the authors' analysis, the GDPR would apply to all processing done by that company simply by virtue of it having a Paris office, whether the data subjects interacting with it were French, Canadian, or even American, whether they accessed the company's services from France, Canada, or the U.S., and even if all the processing occurred outside of the EU. This would be an absurd result inconsistent with th[...]



The Future of .COM Pricing

2018-02-13T08:59:00-08:00

When you've been around the domain industry for as long as I have, you start to lose track of time. I was reminded late last year that the 6-year agreement Verisign struck with ICANN in 2012 to operate .com will be up for expiration in November of this year. Now, I don't for a second believe that .com will be operated by any other party, as Verisign's contract does give them the presumptive right of renewal. But what will be interesting to watch is what happens to Verisign's ability to increase the wholesale cost of .com names. The 2012 agreement actually afforded Verisign the ability to increase prices by 7%, up to four times over the 6-year course of the contract. However, when the US Commerce Department approved the agreement, it did so without the ability for Verisign to implement those price increases. At that time, the wholesale price of a .com domain was $7.85, and that's where it stands today with the prices to registrars being frozen. Under the terms of the original 2012 agreement, .com prices could have been as high as $10.26 today had Verisign taken advantage of their price increases. As an aside, I've long thought that the price of a single .com domain was incredibly inexpensive when you think about it in comparison to other costs of running a business.

While I don't have any concrete insight into whether the price freeze will continue, there is obviously a new administration in Washington DC. Its view on this agreement could be different from that of the previous administration. Since this administration came into office, we have seen a number of pro-business initiatives undertaken, so perhaps that will carry over to the Verisign agreement as well. Another big difference today is that the domain market, in general, is vastly different than it was in 2012 — with the introduction of hundreds of new gTLDs. There are exponentially more alternatives to .com today than there were 6 years ago, so it's possible that too will have an impact on the decision. With over 131 million registered .com names, it will be interesting to see how a potential increase of a few dollars per name would play out in the market, and the impact that it would have on corporate domain portfolios which are still largely comprised of .com names.

Written by Matt Serlin, SVP, Client Services and Operations at Brandsight

Follow CircleID on Twitter

More under: Domain Names, ICANN
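For anyone who wants to check the compounding arithmetic behind the price-cap discussion above, here is a short sketch of four 7% increases applied to the $7.85 wholesale price; the exact rounding convention used at each step likely accounts for the small difference from the roughly $10.26 figure cited.

```python
# Sketch: what four 7% increases would do to the $7.85 .com wholesale price.
price = 7.85
for step in range(4):
    price *= 1.07  # one 7% increase per permitted step
    print(f"after increase {step + 1}: ${price:.2f}")
# Prints roughly $8.40, $8.99, $9.62, $10.29 — in the ballpark of the figure cited above,
# with the small difference down to rounding conventions.
```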



Why Is It So Hard to Run a Bitcoin Exchange?

2018-02-13T08:42:00-08:00

One of the chronic features of the Bitcoin landscape is that Bitcoin exchanges screw up and fail, starting with Mt. Gox. There's nothing conceptually very hard about running an exchange, so what's the problem? The first problem is that Bitcoin and other blockchains are by design completely unforgiving. If there is a bug in your software which lets people steal coins, too bad, nothing to be done. Some environments need software that has to be perfect, or as close as we can make it, such as space probes that have to run for years or decades, and implanted medical devices where a bug could kill the patient. Programmers have software design techniques for those environments, but they generally start with a clear model of what the environment is and what sort of threats the device will have to face. Then they write and test the code as completely as they can, and burn it into a read-only memory in the device, which prevents deliberate or accidental later changes to the code. Running an online cryptocurrency exchange is about as far from that model as one can imagine. The exchange's web site faces the Internet where one can expect non-stop hostile attacks using rapidly evolving techniques. The software that runs the web site and the databases is ordinary server stuff, reasonably good quality, but way too big and way too dynamic to allow the sorts of techniques that space probes use. Nonetheless, there are plenty of ways to try and make an exchange secure. A bitcoin exchange receives bitcoins and money from its customers, who then trade one for the other, and later ask for the results of the trade back. The bitcoins and money that the customers have sent stay in inventory until they're returned to the customers. If the exchange closes its books once a day, at that point the bitcoins in inventory (which are public since the bitcoin ledger is public) should match the amount the customers have sent minus the amount returned. Similarly, the amount in the exchange's bank account should match the net cash sent. The thing in the middle is a black hole since with most bitcoin exchanges you have no idea where your bitcoins or cash have gone until you get them back, or sometimes you don't. To make it hard to steal the bitcoins, an exchange might keep the inventory in a cold wallet, one where the private key needed to sign transactions is not on any computer connected to the Internet. Once a day they might burn a list of bitcoin withdrawals onto a CD, take the CD into a vault where there's a computer with the private wallet key, create and sign the withdrawal transactions, and burn them onto another CD, leave the computer, the first CD, and a copy of the second CD in the vault, and take the second CD to an online computer that can send out the transactions. They could do something similar for cash withdrawals, with a bank that required a cryptographic signature with a key stored on an offline computer for withdrawal instructions. None of this is exotic, and while it wouldn't make anything fraud-proof, it'd at least be possible to audit what's happening and have a daily check of whether the money and bitcoins are where they are supposed to be. But when I read about the endless stories of crooks breaking into exchanges and stealing cryptocurrencies from hot (online) wallets, it's painfully clear that the exchanges, at least the ones that got hacked, don't do even this sort of simple stuff. Admittedly, this would slow things down. 
If there's one CD burned per day, you can only withdraw your money or bitcoins once per day. Personally, I think that's entirely reasonable — my stockbroker takes two days to transfer cash and longer than that to transfer securities, but we all seem to manage.

Written by John Levine, Author, Consultant & Speaker

Follow CircleID on Twitter

More under: Blockchain, Cyberattack, Cybe[...]
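The daily books-closing check described above is simple enough to express in code. The following is a hypothetical sketch of the audit logic only — invented field names, no real exchange or blockchain API — comparing the coins and cash actually held against what customer deposits minus withdrawals say should be held.

```python
# Sketch of a daily solvency/reconciliation check for an exchange (illustrative only).
from dataclasses import dataclass

@dataclass
class Ledger:
    deposited: float = 0.0
    withdrawn: float = 0.0

    @property
    def expected_inventory(self) -> float:
        # What customers are owed: everything sent in, minus everything returned.
        return self.deposited - self.withdrawn

def close_books(btc_ledger: Ledger, cash_ledger: Ledger,
                btc_on_chain: float, cash_in_bank: float) -> bool:
    """Return True if holdings match customer liabilities at the daily close."""
    btc_ok = abs(btc_on_chain - btc_ledger.expected_inventory) <= 1e-8
    cash_ok = abs(cash_in_bank - cash_ledger.expected_inventory) <= 0.01
    if not btc_ok:
        print(f"BTC mismatch: hold {btc_on_chain}, owe {btc_ledger.expected_inventory}")
    if not cash_ok:
        print(f"Cash mismatch: hold {cash_in_bank}, owe {cash_ledger.expected_inventory}")
    return btc_ok and cash_ok

# Example: customers sent 120 BTC and withdrew 20; the cold+hot wallets should hold 100.
btc = Ledger(deposited=120.0, withdrawn=20.0)
cash = Ledger(deposited=500_000.0, withdrawn=150_000.0)
print(close_books(btc, cash, btc_on_chain=100.0, cash_in_bank=350_000.0))
```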



Will 5G Trigger Smart City PPP Collaboration?

2018-02-13T08:18:00-08:00

As discussed in previous analyses, the arrival of 5G will trigger a totally new development in telecommunications. Not just in relation to better broadband services on mobile phones — it will also generate opportunities for a range of IoT (internet of things) developments that, among other projects, are grouped together under smart cities (feel free to read 'digital' or 'connected cities'). The problems related to the development of 5G infrastructure, as well as to smart cities, offer a great opportunity to develop new business models for telecommunications companies as well as for cities and communities, to create win-win situations.

5G will require a massive increase in the number of infrastructure access points in mobile networks; many more towers and antennas will need to be installed by the telecommunications companies to deliver the wide range of services that are becoming available through this new technology. Furthermore, all the access points need to be connected to a fibre optic network to manage the capacity and the quality needed for the many broadband services that will be carried over it. This is an ideal network structure for cities, which require a very dense level of connectivity, but cities don't have the funds to make that happen. So telecommunications companies working together with cities could be a win-win situation.

Cities that do have a holistic and integrated smart city strategy in place can take a leadership role by developing the requirements needed for a city-wide digital infrastructure that can provide the social and economic benefits for its citizens. The critical element of an integrated strategy is that it must cut through the traditional bureaucratic silo structures. 5G is an ideal technology for a range of city-based IoT services in relation to energy, environment, sustainability, mobility, healthcare, etc. Mobile network infrastructure (including 5G) will generally follow the main arteries and hotspots of the city, where at the same time there is usually a range of city- and utilities-based infrastructure that can be used for 5G co-location. IoT is also seen by the operators as a new way to move up the value chain.

But if we are looking at 5G as potential digital infrastructure for smart cities, the cities' infrastructure requirements will need to be discussed upfront with the network operators who are interested in building 5G networks. By working with the cities, these operators instantly get so-called anchor tenants for their network, which will help them to develop the viable business and investment models needed for such a network. The wrong strategy would be to put the requirements of the telecommunications companies before those of the cities. The development of 5G will take a decade or so (2020-2030), and it is obvious that cities that already have their strategic (holistic) smart city plans ready are in a prime position to sit down with the operators; and they will be among the first who will be able to develop connected cities for their people. This will, of course, create enormous benefits and will attract new citizens and new businesses, especially those who understand the advantages of living or being situated in such a digital place. MVNOs (mobile virtual network operators) are another potential winner in this development — they could specialise in what is needed to create a smart city, smart community, smart precinct, etc.
Telecommunication companies AT&T and Verizon in the USA clearly see the opportunities to work with cities, however, this is mainly based on getting easy access to valuable city real estate to install thousands of new antennas rather than looking at this infrastructure development from a city perspective. There is even some bullying involved by threatening that cities will be left behind if they don't ju[...]



Suggestions for the Cuba Internet Task Force

2018-02-13T07:18:00-08:00

The Cuba Internet Task Force (CITF) held their inaugural meeting last week. Deputy Assistant Secretary for Western Hemisphere Affairs John S. Creamer will chair the CITF, and there are government representatives from the Department of State, Office of Cuba Broadcasting, Federal Communications Commission, National Telecommunications and Information Administration and Agency for International Development. Freedom House will represent NGOs and the Information Technology Industry Council will represent the IT industry. They agreed to form two subcommittees — one to explore the role of media and freedom of information in Cuba and one to explore Internet access. The subcommittees are to provide preliminary reports of recommendations within six months, and the CITF will reconvene in October to review those preliminary reports and prepare a final report with recommendations for the Secretary of State and the President. They are soliciting public comments, looking for volunteers for service on the subcommittees and have established a Web site. I may be wrong, but it sounds like the subcommittees will be doing much of the actual work. The subcommittee on technological challenges to Internet access will include US technology firms and industry representatives and the subcommittee on media and freedom of information will include NGOs and program implementers with a focus on activities that encourage freedom of expression in Cuba through independent media and Internet freedom. They aim to maintain balance by including members from industry, academia and legal, labor, or other professionals. I hope the subcommittee on media and Internet freedom resists proposals for clandestine programs. Those that have failed in the past have provided the Cuban government with an excuse for repression and cost the United States money and prestige. Both the Cuban and United States governments have overstated what their impact would have been had they succeeded. Cuba's current Wi-Fi hotspots, navigation rooms, home DSL and 3G mobile are stopgap efforts based on obsolete technology, and they provide inferior Internet access to a limited number of people. (El Paquete Semanal is the most important substitute for a modern Internet in Cuba today). It would be difficult for the subcommittee on technological challenges to devise plans or offer support for activities the current Cuban government would allow and be able to afford. That being said, the situation may ease somewhat after Raúl Castro steps down in April. Are there short-run steps Cuba would be willing to take that we could assist them with? For example, the next Cuban government might be willing to consider legitimizing and assisting some citizen-implemented stopgap measures like street nets, rural community networks, geostationary satellite service and LANs in schools and other organizations. They might also be willing to accept educational material and services like access to online material from Coursera or LAN-based courseware from MIT or The Khan Academy. (At the time of President Obama's visit, Cisco and the Universidad de las Ciencias Informaticas promised to cooperate in bringing the Cisco Network Academy to Cuba, but, as far as I know, that has not happened). The US requires Coursera and other companies to block Cuban access to their services. That is a policy we could reverse unilaterally, without the permission of the Cuban government. 
Google is the only US Internet company that has established a relationship with and been allowed to install infrastructure in Cuba. The next Cuban administration might be willing to trust them as partners in infrastructure projects like, for example, providing wholesale fiber service or establishing a YouTube production space in Havana. Cuba could also serve [...]



Automation for Physical Devices: the Holy Grail of Service Provisioning

2018-02-13T06:19:00-08:00

Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) are finally starting to pick up momentum. In the process, it is becoming clear that they are not the silver bullet they were originally advertised to be. While great for some use cases, emerging technologies like SDN and NFV have been primarily designed for virtual greenfield environments. Yet large service providers continue to run tons of physical network devices that are still managed manually. Based on discussions with senior executives at various service providers, the industry is gearing towards service agility and minimizing Operating Expenses (OPEX) through automation. But as fully automated workflows typically also involve physical network devices at select phases of the process, most network infrastructure vendors have been unable to go the whole nine yards together with their clients. One of the obvious reasons why carriers have been hesitant to embrace automation is that any automated process is only as strong as its weakest link. By having to resort to manual steps towards the end of the process, the service agility suffers. But perhaps even more importantly, partial automation abilities will diminish OPEX savings and limit the number of possible business cases. This is why automation for physical network devices is becoming the holy grail of service provisioning.

Enter Ansible – the Network Robot

Traditional orchestrators such as Chef, Puppet and Jenkins require agents to be installed on the managed devices. For large service providers with tens of thousands of devices to manage, this model is simply not practical. But over the last six months, the traditional approach has started giving way to agentless orchestration based on standard protocols such as SSH and SCP. Pioneered by Red Hat with its Ansible network module, service providers are now able to weave the management of physical devices into their lifecycle orchestration models. For practical purposes, this is almost like placing a robot onto a network technician's seat, ensuring that changes to physical network devices are carried out automatically. Because Ansible is an open source solution backed by nearly every major vendor in the industry, the breadth of the ecosystem also enables valuable multi-vendor scenarios. This is important because it allows automated processes to run all the way from cloud portals to the physical devices on the ground. Given some time, this will be nothing less than revolutionary in unleashing the digital transformation.

Spreadsheets that Choked the Robot

The curious thing about network management is that there are typically no sophisticated solutions in place for managing VLAN spaces, Virtual Routing Functions (VRFs) and their connections with logical networks. Instead, the most common tool used for this purpose is a humble spreadsheet. Considering that automated management of physical network devices relies heavily on assigning suitable VLANs, networks, and device-specific configuration parameters, the last manual hurdle for automated network services is the spreadsheet used to manage them. Without a backend from which to query all these properties, initiatives aimed at end-to-end automation are likely to hit a wall. To eliminate the spreadsheets that choke the network robots, orchestrators need a single backend they can use to obtain all network-related data needed to configure devices.
Here is a simple three-step methodology for unleashing the network robot: 1) Merge the entire network structure including logical networks, VLAN spaces and VRFs into a unified management system. This backend should provide all orchestrators with a simple REST-based API from which they can query free network resources and device-spec[...]
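To illustrate the "single backend" idea behind that methodology, here is a minimal hypothetical sketch: an orchestration step that asks an IPAM/network backend (the URL, endpoints, and JSON fields are invented for illustration) for a free VLAN and device-specific parameters, then renders them into a configuration snippet that an agentless tool such as Ansible could push over SSH.

```python
# Hypothetical sketch: querying a unified network backend instead of a spreadsheet.
# The REST endpoints and JSON fields below are invented for illustration only.
import requests

IPAM_API = "https://ipam.example.net/api/v1"   # assumed backend URL
TOKEN = {"Authorization": "Token <redacted>"}  # assumed auth scheme

def reserve_vlan(site: str, purpose: str) -> dict:
    """Ask the backend for the next free VLAN at a site and reserve it."""
    resp = requests.post(f"{IPAM_API}/vlans/reserve",
                         json={"site": site, "purpose": purpose},
                         headers=TOKEN, timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g. {"vlan_id": 2041, "prefix": "10.20.41.0/24", "vrf": "CUST-A"}

def render_config(params: dict, interface: str) -> str:
    """Render a vendor-neutral config snippet from the reserved parameters."""
    return (
        f"vlan {params['vlan_id']}\n"
        f"interface {interface}\n"
        f"  switchport access vlan {params['vlan_id']}\n"
        f"  description {params['vrf']} customer access\n"
    )

if __name__ == "__main__":
    vlan = reserve_vlan(site="HEL-01", purpose="customer-access")
    snippet = render_config(vlan, interface="GigabitEthernet0/1")
    print(snippet)  # hand this off to the orchestrator (e.g. an Ansible play) to push
```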



UK's Government Websites Infected by Cryptocurrency Mining Malware

2018-02-12T12:57:00-08:00

Thousands of websites are reported to have been infected by malware over the weekend forcing visitors' computers to mine cryptocurrency while using the sites. The affected websites include UK's National Health Service (NHS), the Student Loans Company and several English councils. Patrick Greenfield reporting in the Guardian: "The cryptojacking script was inserted into website codes through BrowseAloud, a popular plugin that helps blind and partially-sighted people access the web. More than 5,000 websites have been flooded by the malware. Software known as Coinhive, which quietly uses the processing power of a user's device to mine open source cryptocurrency Monero, appears to have been injected into the compromised BrowseAloud plugin."

Follow CircleID on Twitter

More under: Blockchain, Cyberattack, Cybercrime, Malware




IDC Predicts Blockchain Spending in the Middle East and Africa to More than Double in 2018

2018-02-12T12:16:00-08:00

Spending on blockchain solutions in the Middle East and Africa (MEA) will more than double this year, according to the latest insights from International Data Corporation. Megha Kumar, IDC's research director for software in the Middle East, Africa, and Turkey: "There is clearly an immense amount of interest around distributed ledger technologies (DLT) in the region. This is being driven by the pressing need for organizations to improve their efficiency, agility, security, and integrity. In 2018, we expect more organizations across MEA to move beyond the evaluation and proof-of-concept phase to pilots and even deployments."

"IDC expects blockchain spending in MEA to reach $307 million in 2021, which represents a compound annual growth rate (CAGR) of 77.4% for the 2016-2021 period. While various industries are evaluating the use of blockchain, IDC research suggests the region's public sector (including government, education, and healthcare) will spend an estimated $120.8 million in this space in 2021, accounting for 39.2% share. It will be followed by the financial services sector at 35.5% and the distribution and services sector at 14.1%."

"From a technology perspective, IDC's forecast shows services (IT services and business services) accounting for 52.7% of MEA blockchain spending in 2021. Blockchain software platforms will be the biggest and fastest-growing category in the software space over the coming years, while cloud will be the fastest-growing component in terms of hardware."


Follow CircleID on Twitter

More under: Blockchain
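As a rough check of the compounding arithmetic behind the forecast quoted above, here is a short sketch; the 2016 baseline is simply back-computed from the $307 million and 77.4% CAGR figures and is not a number taken from the IDC report.

```python
# Sketch: back-computing the implied 2016 baseline from IDC's 2021 forecast and CAGR.
target_2021 = 307e6   # USD, from the IDC forecast quoted above
cagr = 0.774          # 77.4% compound annual growth, 2016-2021 (five compounding periods)

implied_2016 = target_2021 / (1 + cagr) ** 5
print(f"implied 2016 spending ~ ${implied_2016 / 1e6:.0f}M")

# Growing that baseline forward reproduces the 2021 figure:
spend = implied_2016
for year in range(2017, 2022):
    spend *= 1 + cagr
print(f"2021 check ~ ${spend / 1e6:.0f}M")
```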




Software-Defined Networking: What's New, and What's New for Tech Policy?

2018-02-12T09:40:00-08:00

The Silicon Flatirons Conference on Regulating Computing and Code is taking place in Boulder. The annual conference addresses a range of issues at the intersection of technology and policy and provides an excellent look ahead to the tech policy issues on the horizon, particularly in telecommunications. I was looking forward to yesterday's panel on "The Triumph of Software and Software-Defined Networks", which had some good discussion on the ongoing problem surrounding security and privacy of the Internet of Things (IoT); some of the topics raised echoed points made on a Silicon Flatirons panel last year. My colleague and CITP director Ed Felten made some lucid, astute points about the implications of the "infiltration" of software into all of our devices. Unfortunately, though (despite the moderator's best efforts!), the panel lacked any discussion of the forthcoming policy issues concerning Software-Defined Networking (SDN); I was concerned with some of the incorrect comments concerning SDN technology.

Oddly, two panelists stated that Software Defined Networking has offered "nothing new". Here's one paper that explains some of the new concepts that came from SDN (including the origins of those ideas), and another that talks about what's to come as machine learning and automated decision-making begin to drive more aspects of network management. Vint Cerf corrected some of this discussion, pointing out one example of a fundamentally new capability: the rise of programmable hardware. One of the same panelists also said that SDN hasn't seen any deployments in the wide-area Internet or at interconnection, a statement that has many counter-examples, including projects such as SDX (and the related multi-million dollar NSF program), Google's Espresso and B4, and Facebook's Edge Fabric, to name just a few of the public examples.

Some attendees commented that the panel could have discussed how SDN, when coupled with automated decision-making ("AI" in the parlance du jour), presents both new opportunities and challenges for policy. This post attempts to bring some of the issues at the intersection of SDN and policy to light. I address two main questions: What are the new technologies around SDN that people working in tech policy might want to know about? What are some interesting problems at the intersection of SDN and tech policy? The first part of the post summarizes about 15 years of networking research in three paragraphs, in a form that policy and law scholars can hopefully digest; the second part offers some thoughts about new and interesting policy and legal questions — both opportunities and challenges — that these new technologies bring to bear.

SDN: What's New in Technology?

Software-defined networking (SDN) describes a type of network design where a software program, running separately from the underlying hardware routers and switches, can control how traffic is forwarded through the network. While in some sense one might think of this concept as "nothing new" (after all, network operators have been pushing configuration to routers with Perl scripts for decades), SDN brings several new twists to the table: the control of a collection of network devices from a single software program, written in a high-level programming language. The notion that many devices can be controlled from a single "controller" creates the ability for coordinated decisions across the network, as opposed to each router and switch essentially being configured (and acting) independently.
When we first presented this idea for Internet routing in the mid-2000s, this was highly controversial, with some even claiming that this was "[...]
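For readers outside networking, here is a toy sketch of the "single controller program" idea described above — purely illustrative Python, not OpenFlow or any real controller API: one program holds the global network view and derives each switch's forwarding rules from it, which is the coordinated, centralized decision-making the post describes.

```python
# Toy sketch of centralized SDN-style control (illustrative pseudo-controller, no real API).
import networkx as nx  # shortest-path computation over a global network view

# Global view of the topology, held by the controller rather than by each device.
topology = nx.Graph()
topology.add_edges_from([("s1", "s2"), ("s2", "s3"), ("s1", "s3"), ("s3", "s4")])

# Where each destination prefix is attached (an assumption made for the example).
attachment = {"10.0.4.0/24": "s4", "10.0.1.0/24": "s1"}

def compute_forwarding_rules(graph: nx.Graph, attach: dict) -> dict:
    """Derive a forwarding table for every switch from the single global view."""
    rules = {switch: {} for switch in graph.nodes}
    for prefix, egress in attach.items():
        for switch in graph.nodes:
            if switch == egress:
                rules[switch][prefix] = "local"
            else:
                path = nx.shortest_path(graph, switch, egress)
                rules[switch][prefix] = path[1]  # next hop toward the egress switch
    return rules

# A real controller would push these rules to the devices; here we just print them.
for switch, table in compute_forwarding_rules(topology, attachment).items():
    print(switch, table)
```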



What's So Outrageous Asking High Prices for Domain Names?

2018-02-12T09:28:00-08:00

Panels appointed to hear and decide disputes under the Uniform Domain Name Dispute Resolution Policy (UDRP) have long recognized that three letter domains are valuable assets. How investors value their domains depends in part on market conditions. Ordinarily (and for good reason) Panels do not wade into pricing because it is not a factor on its own in determining bad faith. That is why a distinguished Panel's decision to transfer ado.com — Autobuses de Oriente ADO, S.A. de C.V. v. Private Registration / Francois Carrillo, D2017-1661 (WIPO February 1, 2018) — received what in polite society is known as a "Bronx Cheer." Morgan Linton headlined: "ADO.com is lost in a UDRP due to its $500,000 price tag the same day DAX.com sells for $500,000." Andrew Allemann's blunt comment in DomainNameWire was "WIPO panel screws Domaining.com owner Francois Carrillo out of Ado.com" (explaining that the Panel gave improper weight to the price). And Raymond Hackney declares that "The ADO.com decision brings up another potential problem" (referring to the logo analysis that the three-member Panel found persuasive in reaching its decision).

The single most prominent reason long-held domain names are lost is the failure to properly curate them (by which I mean allowing the website to be populated with bad faith content from which registration in bad faith can be inferred). Price is not a factor for bad faith without concrete proof of the 4(b)(i) elements, yet in Autobuses de Oriente price was elevated as a prominent factor. The Panel also condemned Respondent because it was passively holding the domain name and offering it for sale on a page that included other domain names, each with a designed logo. Passive holding, too, is not a factor when considered alone; but when combined with other factors, bad faith registration can be inferred.

Does the Autobuses de Oriente decision deserve the universal condemnation it has received? (The three industry bloggers noted above are of the view the Panel put their combined fingers on the scale, and I think that criticism is fair). What constitutes concrete and "fake" evidence is worth exploring because it makes investors (large and small) of random letter domain names vulnerable to Complainants who claim the letters are not random but infringing. No doubt, this is a difficult area for Panels. 2017 saw some notable decisions on three-letter domain names, going both ways: some were lost, others were not.

What we know from the summary of the record in Autobuses de Oriente is that Respondent acquired the domain name in 2012 from an earlier holder whose website (allegedly) contained infringing links to transportation. Ordinarily, a successor is not held responsible for its predecessor's conduct, as long as it does not continue the bad faith after its acquisition (my emphasis). Here, the Panel conjectured that even if Respondent did not know of Complainant's (allegedly) "famous" mark, it was guilty of "willful blindness": [It] does not excuse willful blindness in this case, as it seems apparent from the record that even a cursory investigation by Respondent would have disclosed Complainant's mark especially given the use made of the Domain Name of which Respondent was aware when negotiating for the Domain Name. But, what would a "cursory investigation" have revealed? Well, it would have revealed that the website contained links to transportation, but up to that point in time there had been no UDRP claims from Autobuses de Oriente, so why would an investor (or any ordinary registrant for that matter) "know" that the links were infringing?
To have determined that the domain n[...]



Pyeongchang Olympics Organizers Investigating Possible Cyberattack on Opening Day

2018-02-10T09:45:00-08:00

Reports from various sources indicate Pyeongchang Olympics organizers were looking into a disruption of non-critical systems on the day of the opening ceremony but could not yet confirm if it was a cyberattack. Karolos Grohmann reporting in Reuters: "Some local media reported system problems, including the Games website and some television sets, were due to a cyberattack but [Games spokesman] Sung said it was still too early to determine whether hackers had attempted to damage them. ... There were some issues that affected some of our non-critical systems last night for a few hours ... Experts are watching to ensure and maintain any systems at expected service levels. We are currently investigating the cause of the issue. At this time we cannot confirm [a cyberattack]."

Follow CircleID on Twitter

More under: Cyberattack




Foggy Bottom's New Cyberspace Bureau "Lines of Effort": Dumb and Dumber

2018-02-10T08:17:00-08:00

The release of the Tillerson letter to the House Committee on Foreign Affairs describes the State Department's new "Cyber Bureau" together with its "primary lines of effort." The proposal is said to be designed to "lead high-level diplomatic engagements around the world." Two of those "efforts" deserve special note and provide an entirely new spin on the affectionate local term for the Department — Foggy Bottom. While a few of the "efforts" are longstanding reasonable roles, most evince a new bilateral "America First" belligerence. Two deserve to be called "dumb and dumber." Perhaps they are best described in a hypothetical dialogue between a US high-level diplomat (call him Donald) and one from a foreign country (call him Vlad). [With apologies to SNL.]

Donald: I'm here today to tell you about two of our new Cyber Bureau dictates...I mean efforts.

Vlad: Please do tell.

Donald: The world must "maintain open and interoperable character [sic] of the Internet."

Vlad: Well said, Donald. Your foolish effort will greatly help our intelligence service penetrate American infrastructure and further manipulate elections! Will also help extend effort to other countries. It is better you spend money on huge wall. (wink, wink)

Donald: I have more. The Cyber Bureau also says that everyone must "facilitate the exercise of human rights, including freedom of speech and religion through the Internet."

Vlad: Well said again, Donald! Your effort will greatly assist comrade Assange and his colleagues get all those WikiLeaks streamed out to the world. Our intelligence agents have more coming — including through their social media bots. After all, our intelligence agents have their rights too. (wink, wink) We can also get all of your Nazis to get their message out and disrupt your society. Hurray for internet freedom of speech.

Donald: Thank you Vlad for supporting our Cyber Bureau efforts.

Vlad: Your bilateral bullying efforts and isolation will greatly assist our diplomats in increasing our global stature. However, you have to be careful. You don't want to look like you are supporting Hillary's Internet Arab Spring strategy or ISIS getting their messaging out.

If there is intelligent life left in the U.S. Congress, it should remind Tillerson that for over a hundred years, American cyber diplomacy was based on a strategy of technology neutrality and not "politicizing" the focus on the common global interest in "facilitating peaceful relations, international cooperation among peoples and economic and social development by means of efficient telecommunication services." The text is the preamble of a treaty that every nation on Earth, including the U.S., has signed scores of times over the past 168 years. The strategy is also a pragmatic one because, as the treaty text notes at the outset, everyone "fully recogniz[es] the sovereign right of each State to regulate its telecommunication," and communications at borders can be stopped.

That long-standing strategy was abandoned twenty years ago when the Clinton Administration seized upon one particular technology platform — the DARPA academic research internet — and sought to evangelize it as the world's unfettered technology mandate. It was packaged as a utopian vision of happiness and economic wealth for all, while having a plethora of fatal flaws and disastrous potential effects with no effective mitigations. It should never have been allowed into the public infrastructure. The Cyber Bureau mandate urgently needs to be re-written.
Written by Anthony Rutkowski, Principal, Netmagic Associates LLC

Follow CircleID on Twitter [...]



The Internet Association Releases Letter Backing Senate Effort to Reinstate Net Neutrality Rules

2018-02-08T21:06:00-08:00

The Internet Association (IA) whose members include the likes of Google, Amazon and Facebook, on Thursday issued a letter addressed to Senate Majority Leader Mitch McConnell (R-Ky.) and Minority Leader Charles Schumer (D-N.Y.) in support of the reinstatement of FCC rules. From the letter: "The FCC's recent Restoring Internet Freedom Order (the "Order") represents the complete reversal of broad, bipartisan consensus in the operation of the internet, and leaves consumers with no meaningful protections to ensure their access to the entire internet. The current Order should not stand, and IA supports all efforts — including comprehensive bipartisan legislation — to restore strong, enforceable net neutrality protections at the federal level. To that end, IA supports the Senate Congressional Review Act resolution to invalidate the Federal Communications Commission's January 4, 2018, Restoring Internet Freedom Order. While the CRA will help alleviate immediate concerns, the internet industry urges Congress to legislate a permanent solution."

Follow CircleID on Twitter

More under: Access Providers, Broadband, Net Neutrality, Policy & Regulation




ICANN Cancels .CORP, .HOME, and .MAIL TLDs Indefinitely Due to Collision Concerns

2018-02-08T20:30:00-08:00

ICANN has announced that it has indefinitely deferred the delegations of the new TLDs .CORP, .HOME, and .MAIL due to the high-risk nature of the strings. The domain name system overseer has determined that these TLDs can cause name collisions — the overlap of private and public namespaces, which may result in unintended and harmful results. "The introduction of any new domain name into the DNS at any level creates the potential for name collision [however] the New gTLD Program has brought renewed attention to this issue of queries for undelegated TLDs at the root level of the DNS because certain applied-for new TLD strings could be identical to name labels used in private networks." ICANN says the applicants for the TLDs will be refunded the full application fee of $185,000.

Follow CircleID on Twitter

More under: DNS, ICANN, New TLDs
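For readers unfamiliar with name collisions: the risk is that names used only on a private network (for example, hosts under an internal .corp suffix) suddenly start resolving on the public Internet once the TLD is delegated. A rough, hypothetical way to spot exposure is to check whether internal-only names already resolve via public DNS; the hostnames and suffixes below are made up for illustration, and the check is most meaningful when run against a public resolver from outside the private network.

```python
# Rough sketch: flag internal hostnames that unexpectedly resolve in public DNS.
# Hostnames and suffixes are illustrative; a real audit would use the site's own inventory.
import socket

INTERNAL_NAMES = [
    "mail.example.corp",      # names like these should only resolve on the private network
    "fileserver.office.home",
    "smtp.internal.mail",
]

def resolves(name: str) -> bool:
    """Return True if the name resolves via the resolver this host is configured to use."""
    try:
        socket.getaddrinfo(name, None)
        return True
    except socket.gaierror:
        return False

for name in INTERNAL_NAMES:
    if resolves(name):
        print(f"COLLISION RISK: {name} resolves outside the private network")
    else:
        print(f"ok: {name} does not resolve publicly")
```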