CircleID
http://www.circleid.com/rss/rss_intblog/



Latest posts on CircleID



Updated: 2018-02-21T13:30:00-08:00

 



Report Estimates Cybercrime Taking $600 Billion Toll on Global Economy

2018-02-21T13:30:00-08:00

Cybercrime is costing businesses close to $600 billion, or 0.8 percent of global GDP, according to a report released today by McAfee in partnership with the Center for Strategic and International Studies (CSIS). The estimate is up from a similar 2014 study that put global losses at about $445 billion. The report attributes this growth to cybercriminals quickly adopting new technologies, the ease of engaging in cybercrime (including an expanding number of cybercrime centers) and the growing financial sophistication of top-tier cybercriminals.

(image) Estimated daily cybercrime activity. Source: McAfee / CSIS 2018 report

From the report: "Cybercrime operates at scale. The amount of malicious activity on the internet is staggering. One major internet service provider (ISP) reports that it sees 80 billion malicious scans a day, the result of automated efforts by cybercriminals to identify vulnerable targets. Many researchers track the quantity of new malware released, with estimates ranging from 300,000 to a million viruses and other malicious software products created every day. Most of these are automated scripts that search the web for vulnerable devices and networks. Phishing remains the most popular and easiest way to commit cybercrime, with the Anti-Phishing Working Group (APWG) recording more than 1.2 million attacks in 2016, many linked to ransomware. This number may be low since the FBI estimated there were 4,000 ransomware attacks every day in 2016. The Privacy Rights Clearing House estimates there were 4.8 billion records lost as a result of data breaches in 2016, with hacking responsible for about 60% of these."

Data on cybercrime remains poor: The authors suggest that data on cybercrime remains poor because governments around the world underreport incidents and have been negligent in their efforts to collect data on cybercrime.

Recommendations: Although the report is mainly focused on estimating the cost of cybercrime rather than on recommendations, it offers the following as obvious steps based on its cost analysis:

- Uniform implementation of basic security measures, such as regular updating and patching, open security architectures and investment in defensive technologies.
- Increased cooperation among international law enforcement agencies, both with other nations' law enforcement agencies and with the private sector.
- Improved collection of data by national authorities.
- Greater standardization and coordination of cybersecurity requirements, particularly in key sectors like finance.
- Development of the Budapest Convention, a formal treaty on cybercrime that has made slow progress in the face of opposition from Russia and other countries.
- International pressure on state sanctuaries for cybercrime, imposing some kind of penalty or consequence on governments that fail to take action against cybercrime.

Follow CircleID on Twitter

More under: Cyberattack, Cybercrime, Cybersecurity, DDoS, Internet Governance, Malware, Policy & Regulation [...]



ICANN Spearheading Launch of Virtual DNS Entrepreneurship Center of the Caribbean

2018-02-21T11:44:00-08:00

The Internet Corporation for Assigned Names and Numbers (ICANN) is spearheading an initiative to launch the Virtual DNS Entrepreneurship Center of the Caribbean (VDECC). Gerard Best reporting in the Caribbean Journal: "VDECC aims to open up new money-making opportunities in the DNS industry for Internet businesses and entrepreneurs across the region, including Internet service providers, web hosting companies, top-level domain operators, domain name registrars and resellers, web developers, digital marketers, e-commerce startups and Internet legal experts." The initiative was launched in Port of Spain on Feb. 19.

Follow CircleID on Twitter

More under: DNS, ICANN




Vermont Governor 5th to Take a Stand Against Rollback of Net Neutrality Rules

2018-02-21T11:14:00-08:00

Vermont Gov. Phil Scott is the latest state governor to take a stand against the FCC's rollback of net neutrality rules. Ryan Johnston reporting in StateScoop: "Scott last week took executive action mandating that any internet service provider (ISP) holding or seeking a state contract must include net neutrality protections in its services for all subscribers. He becomes the fifth governor to use the tactic, which is intended to pressure ISPs to operate as if the FCC did not repeal the Obama-era rules."

Follow CircleID on Twitter

More under: Access Providers, Net Neutrality




WHOIS Access and Interim GDPR Compliance Model: Latest Developments and Next Steps

2018-02-20T12:17:00-08:00

WHOIS access and development of an interim GDPR compliance model remains THE hot topic within the ICANN community. Developments are occurring at a breakneck pace, as ICANN and contracted parties push for an implementable solution ahead of the May 25, 2018 effective date of the GDPR. To quickly recap:

- Between November 11, 2017 and January 11, 2018, various ICANN community participants submitted different proposed interim GDPR compliance models to ICANN;
- On January 12, 2018, ICANN published a set of three proposed interim GDPR compliance models of its own design for community input;
- On January 24, 2018, the ICANN Intellectual Property and Business Constituencies (IPC and BC, respectively) held a community-wide webinar, with in-person attendees in Washington, DC and Brussels, to discuss the ICANN and community models, and key issues and concerns in developing an interim compliance model while preserving access to WHOIS data for specific legitimate purposes, including law enforcement, cybersecurity, consumer protection, and intellectual property enforcement, among other business and individual user needs;
- On January 29, 2018, ICANN formally closed its community input period on the compliance models;
- On February 1, 2018, the IPC and BC sent a joint letter to the Article 29 Working Party, with a copy to ICANN, providing an overview of WHOIS uses and needs for law enforcement, cybersecurity, consumer protection and intellectual property enforcement, and how these legitimate purposes fit within the framework of the GDPR;
- On February 2, 2018, ICANN published a matrix of all the proposed interim compliance models, and a draft summary of discussion and comments regarding the models;
- On February 7, 2018, the European Commission provided additional input to ICANN regarding the various proposed compliance models; and
- Between February 10 and February 16, 2018, ICANN provided updates to various community leaders regarding a compliance model that ICANN had begun to coalesce around, based on the prior models, community input, and community discussions (the "convergence model").

ICANN is now poised to formally publish the convergence model, although the community continues to discuss and seek a solution that is acceptable to all stakeholders. As part of those continued discussions, the IPC and BC will host another cross-community discussion, following up on their co-hosted event on January 24. This second event will take place on Thursday, February 22, 2018 from 9 am to 12 pm Eastern (US) (1400-1700 UTC), with in-person participation at the Winterfeldt IP Group offices in Washington, DC and the ICANN office in Brussels, Belgium. Remote participation will also be available through Adobe Connect. We invite all readers to participate in this important ongoing conversation. Please RSVP to denise@winterfeldt.law if you or your colleagues would like to join in person in Washington, DC or Brussels, or via remote participation.

Written by Brian Winterfeldt, Founder and Principal at Winterfeldt IP Group

Follow CircleID on Twitter

More under: Domain Names, ICANN, Law, Privacy, Whois [...]



SpaceX Starlink and Cuba - A Match Made in Low-Earth Orbit?

2018-02-20T11:05:00-08:00

I've suggested that Cuba could use geostationary-orbit (GSO) satellite Internet service as a stopgap measure until they could afford to leapfrog over today's technology to next-generation infrastructure. They did not pick up on that stopgap suggestion, but how about low-Earth orbit (LEO) satellite Internet service as a next-generation solution?

SpaceX, OneWeb, Boeing and others are working on LEO satellite Internet projects. There is no guarantee that any of them will succeed (these projects require new technology and face logistical, financial and regulatory obstacles) but, if successful, they could provide Cuba with affordable, ubiquitous, next-generation Internet service. Cuba should follow and consider each potential system, but let's focus on SpaceX since their plan is ambitious and they might have the best marketing/political fit with Cuba.

LEO satellite service will hopefully reach a milestone this week when SpaceX launches two test satellites. If the tests go well, SpaceX plans to begin launching operational satellites in 2019 and begin offering commercial service in the 2020-21 time frame. They will complete their first constellation of 4,425 satellites by 2024. (To put that in context, there are fewer than 2,000 operational satellites in orbit today.) SpaceX has named their future service "Starlink," and, if Starlink succeeds, they could offer Cuba service as early as 2020 and no later than 2024, depending upon which areas they plan to service first.

What has stopped the Cuban Internet, and why might LEO satellites look good to Cuba? Cuba blames their lack of connectivity on the US embargo, but President Obama cleared the way for the export of telecommunication equipment and services to Cuba, and Trump has not reversed that decision. I suspect that fear of losing political control (the inability to filter and surveil traffic) stopped Cuba from allowing GSO satellite service. Raúl Castro and others feared loss of control of information when Cuba first connected to the Internet in 1996, but Castro is about to step down, and perhaps the next government will be more aware of the benefits of Internet connectivity and more confident in their ability to use it to their advantage.

A lack of funds has also constrained the Cuban Internet: they cannot afford a large terrestrial infrastructure buildout and are reluctant (for good and bad reasons) to accept foreign investment. SpaceX is building global infrastructure, so the marginal cost of serving Cuba would be near zero. They say that the capital equipment for providing high-speed, low-latency service to a Cuban home, school, clinic, etc. would be a low-cost, user-installed ground station. I've not seen ground-station price estimates from SpaceX, but their rival OneWeb says their $250 ground station will handle a 50 Mbps, 30 ms latency Internet link and serve as a hot spot for WiFi, LTE, 3G or 2G connectivity. Since the marginal cost of serving a nation would be small and they hope to provide affordable global connectivity, I expect their service price will vary among nations. Prices would be relatively high in wealthy nations and low in poor ones; there would be no point in having idle satellites flying over Cuba or any other place.

Expansion of the Cuban Internet is also constrained by bureaucracy and vested financial interest in ETECSA and established vendors. While I do not endorse Cuba's current monopoly service and infrastructure ownership policy, it could remain unchanged if ETECSA were to become a reseller of SpaceX Internet connectivity.

In summary, if Starlink succeeds, they could offer affordable, ubiquitous high-speed Internet, saving Cuba the cost of investing in expensive terrestrial infrastructure and allowing ETECSA to maintain its monopoly. The only intangible roadblock would be a loss of control of traffic. (But Cuban propagandists and trolls would be able to reach a wider audience :-). [...]



Hackers Use Tesla's Amazon Cloud Account to Mine Cryptocurrency

2018-02-20T10:37:00-08:00

Tesla's cloud environment has been infiltrated by hackers and used to mine cryptocurrencies, researchers have discovered. Other victims include Aviva and Gemalto. According to reports, the incident was first discovered by security company RedLock a few months ago when its research team found hundreds of Kubernetes administration consoles accessible over the internet without any password protection.

Initially, RedLock discovered instances belonging to Aviva, a British multinational insurance company, and Gemalto, the world's largest manufacturer of SIM cards. From the report: "Within these consoles, access credentials to these organizations' Amazon Web Services (AWS) and Microsoft Azure environments were exposed. Upon further investigation, the team determined that hackers had secretly infiltrated these organizations' public cloud environments and were using the compute instances to mine cryptocurrencies (refer to Cloud Security Trends - October 2017 report). Since then, a number of other cryptojacking incidents have been uncovered and there are notable differences in the attacks. ... latest victim of cryptojacking is Tesla. While the attack was similar to the ones at Aviva and Gemalto, there were some notable differences. The hackers had infiltrated Tesla's Kubernetes console which was not password protected. Within one Kubernetes pod, access credentials were exposed to Tesla's AWS environment which contained an Amazon S3 (Amazon Simple Storage Service) bucket that had sensitive data such as telemetry."
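The common thread in these incidents is a Kubernetes administration console reachable from the internet with no authentication. As a rough sketch only (the host name is hypothetical and this coarse probe is not RedLock's methodology), a few lines of Python can flag an API endpoint that answers anonymous read requests:

    # Illustrative probe: does a Kubernetes API server answer unauthenticated
    # requests? Run only against clusters you are authorized to test.
    import requests

    def allows_anonymous_reads(api_url):
        # A locked-down cluster should return 401/403 here; HTTP 200 means
        # anyone on the internet can enumerate pods (and any secrets they leak).
        resp = requests.get(api_url + "/api/v1/pods", verify=False, timeout=10)
        return resp.status_code == 200

    print(allows_anonymous_reads("https://k8s.example.com:6443"))  # hypothetical host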

Follow CircleID on Twitter

More under: Blockchain, Cloud Computing, Cyberattack, Cybersecurity




Botnets Shift Focus to Credential Abuse, Says Latest Akamai Report

2018-02-20T09:49:00-08:00


Akamai's Fourth Quarter 2017 State of the Internet report, released today, states that analysis of more than 7.3 trillion bot requests per month has found a sharp increase in the threat of credential abuse, with more than 40 percent of login attempts being malicious. Additionally, the report warns that DDoS attacks remain a consistent threat and that the Mirai botnet is still capable of strong bursts of activity.

14% Increase in DDoS: "Akamai's findings also confirmed that the total number of DDoS attacks last quarter (Q4 2017) increased 14 percent from the same time last year (Q4 2016). While previous reports from this year showed the intensity of the Mirai botnet fading, Akamai saw a spike of nearly 1 million unique IP addresses from the botnet scanning the Internet in late November, showing that it is still capable of explosive growth."

Cybercriminals are increasingly leveraging bot activity for malicious use: "Many of the botnets traditionally responsible for DDoS attacks are being used to abuse stolen login credentials. Of the 17 billion login requests tracked through the Akamai platform in November and December, almost half (43 percent) were used for credential abuse."

Follow CircleID on Twitter

More under: Cyberattack, Cybercrime, Cybersecurity, DDoS




US Congress Considering Legislation to Authorize Faster Access to International Electronic Data

2018-02-19T12:15:00-08:00

Legislation called the Clarifying Lawful Overseas Use of Data Act, or CLOUD Act, was introduced in Congress on Monday, aimed at creating a clearer framework for law enforcement access to data stored in cloud computing systems. Ali Breland reporting in The Hill: "[The] bill is aimed at making it easier for U.S. officials to create bilateral data sharing agreements that allow them to access data stored overseas and also for foreign law enforcement to access data stored on U.S. firms' servers. ... Federal law currently doesn't specify whether the government can demand that U.S. companies give it data they have stored abroad. The CLOUD Act would amend this, likely impacting Microsoft's pending Supreme Court case over data it has stored in Ireland."

Follow CircleID on Twitter

More under: Cloud Computing, Data Center, Law




U.S. Lawmakers Moving to Consider New Rules Imposing Stricter Federal Oversight on Cryptocurrencies

2018-02-19T12:00:00-08:00

Reuters reports today that several top lawmakers have revealed that "bipartisan momentum is growing in the Senate and House of Representatives for action to address the risks posed by virtual currencies to investors and the financial system." David Morgan reports: "Even free-market Republican conservatives, normally wary of government red tape, said regulation could be needed if cryptocurrencies threaten the U.S. economy. ... Much of the concern on Capitol Hill is focused on speculative trading and investing in cryptocurrencies, leading some lawmakers to push for digital assets to be regulated as securities and subject to the SEC's investor protection rules."

Follow CircleID on Twitter

More under: Blockchain, Law, Policy & Regulation




SpaceX Launching Two Experimental Internet Satellites This Weekend

2018-02-16T13:10:00-08:00

On Saturday, SpaceX will be launching two experimental mini-satellites that will pave the way for the first batch of what is planned to be a 4,000-satellite constellation providing low-cost internet around the Earth. George Dvorsky reporting in Gizmodo: "Announced back in 2015, Starlink is designed to be a massive, space-based telecommunications network consisting of thousands of interlinked satellites and several geographically dispersed ground stations. ... The plan is to have a global internet service in place by the mid-2020s, and get a leg-up on potential competitors. ... Two prototypes, named Microsat 2a and 2b, are now packed and ready for launch atop a Falcon-9 v1.2 rocket."

Follow CircleID on Twitter

More under: Access Providers, Broadband, Wireless




A Brooklyn Bitcoin Mining Operation is Causing Interference to T-Mobile's Broadband Network

2018-02-16T10:53:00-08:00

(image) AntMiner S5 Bitcoin Miner by Bitmain, released in 2014. The S5 has since been surpassed by newer models.

The Federal Communications Commission on Thursday sent a letter to an individual in Brooklyn, New York, alleging that a device in the individual's residence used to mine Bitcoin is generating spurious radiofrequency emissions, causing interference to a portion of T-Mobile's mobile telephone and broadband network. The letter states the FCC received a complaint from T-Mobile concerning interference to its 700 MHz LTE network in Brooklyn. In response to the complaint, agents from the Enforcement Bureau's New York Office used direction-finding techniques to confirm that radio emissions in the 700 MHz band were, in fact, emanating from the user's residence. "When the interfering device was turned off the interference ceased. ... The device was generating spurious emissions on frequencies assigned to T-Mobile's broadband network and causing harmful interference." The FCC's warning letter further states that the user's "Antminer s5 Bitcoin Miner" operation constitutes a violation of federal law and could subject the operator to severe penalties, including substantial monetary fines and arrest.

FCC Commissioner Jessica Rosenworcel said in a tweet: "Okay, this @FCC letter has it all: #bitcoin mining, computing power needed for #blockchain computation and #wireless #broadband interference. It all seems so very 2018."

Follow CircleID on Twitter

More under: Access Providers, Blockchain, Broadband, Telecom, Wireless




Hackers Earned Over $100K in 20 Days Through Hack the Air Force 2.0

2018-02-16T07:47:01-08:00

(image) The participating U.S. Airmen and hackers at the conclusion of h1-212 in New York City on Dec 9, 2017

HackerOne has announced the results of the second Hack the Air Force bug bounty challenge, which invited trusted hackers from all over the world to participate for the second time in less than a year. The 20-day bug bounty challenge was the most inclusive government program to date, with 26 countries invited to participate. From the report: "Hack the Air Force 2.0 is part of the Department of Defense's (DoD) Hack the Pentagon crowd-sourced security initiative. Twenty-seven trusted hackers successfully participated in the Hack the Air Force bug bounty challenge — reporting 106 valid vulnerabilities and earning $103,883. Hackers from the U.S., Canada, United Kingdom, Sweden, Netherlands, Belgium, and Latvia participated in the challenge. The Air Force awarded hackers the highest single bounty award of any Federal program to-date, $12,500."

Follow CircleID on Twitter

More under: Cybersecurity




WHOIS Inaccuracy Could Mean Noncompliance with GDPR

2018-02-15T12:41:00-08:00

The European Commission recently released technical input on ICANN's proposed GDPR-compliant WHOIS models that underscores the GDPR's "Accuracy" principle, making clear that reasonable steps should be taken to ensure the accuracy of any personal data obtained for WHOIS databases and that ICANN should be sure to incorporate this requirement in whatever model it adopts. Contracted parties concerned with GDPR compliance should take note.

According to Article 5 of the regulation, personal data shall be "accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay." This standard is critical for maintaining properly functioning WHOIS databases and would be a significant improvement over today's insufficient standard of WHOIS accuracy. Indeed, European Union-based country code TLDs require rigorous validation and verification, much more in line with GDPR requirements, a standard to strive for.

The stage is set for an upgrade to WHOIS accuracy: ICANN's current approach to WHOIS accuracy simply does not comply with the GDPR. Any model selected by ICANN to comply with the GDPR must be accompanied by new processes to validate and verify the contact information contained in the WHOIS database. Unfortunately, the current Registrar Accreditation Agreement, which includes detailed provisions requiring registrars to validate and verify registrant data, does not go far enough to meet these requirements. At a minimum, ICANN should expedite the implementation of cross-field validation, which is required by the 2013 RAA but has not been enforced to date. These activities should be supplemented by examining other forms of validation, building on ICANN's experience in developing the WHOIS Accuracy Reporting System (ARS), which examines accuracy of contact information from the perspective of syntactical and operational validity. Validation and accuracy of WHOIS data has also been a long-discussed matter within the ICANN community: the 2014 Final Report from the Expert Working Group on gTLD Directory Services: A Next-Generation Registration Directory Service (RDS) devotes an entire chapter to "Improving Data Quality," with a recommendation for more robust validation of registrant data. And, not insignificantly, ICANN has already investigated and deployed validation systems in its operations, including those used by its Compliance department to investigate accuracy complaints.

Despite its significance to the protection and usefulness of WHOIS data, the accuracy principle is surprisingly absent from the three WHOIS models presented by ICANN for discussion among relevant stakeholders. Regardless of which model is ultimately selected, the accuracy principle must be applied to any WHOIS data processing activity in a manner that addresses GDPR compliance, both at inception, when a domain is registered, and later, when data is out of date.

All stakeholders can agree that WHOIS data is a valuable resource for industry, public services, researchers, and individual Internet users. The GDPR "Accuracy" principle aside, taking steps to protect the confidentiality of this resource would be meaningless if the data itself were not accurate or complete.

Written by Fabricio Vayra, Partner at Perkins Coie LLP

Follow CircleID on Twitter

More under: Domain Names, ICANN, Privacy, Whois [...]
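To make "syntactical validity" and "cross-field validation" concrete, here is a minimal sketch of the kinds of checks involved; the field names and prefix table are illustrative assumptions, not ICANN's ARS implementation:

    # A toy illustration of syntactic and cross-field WHOIS contact validation.
    # Field names and the prefix table are illustrative, not ICANN's ARS.
    import re

    COUNTRY_PHONE_PREFIX = {"US": "+1", "DE": "+49", "FR": "+33"}  # tiny subset

    def validate_contact(contact):
        problems = []
        # Syntactic validity: a minimal email shape check.
        if not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", contact.get("email", "")):
            problems.append("email fails syntax check")
        # Cross-field validity: the phone prefix should agree with the country.
        prefix = COUNTRY_PHONE_PREFIX.get(contact.get("country", ""))
        if prefix and not contact.get("phone", "").startswith(prefix):
            problems.append("phone prefix does not match country")
        return problems

    print(validate_contact(
        {"email": "admin@example.com", "country": "US", "phone": "+44 20 7946 0000"}))
    # -> ['phone prefix does not match country']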



Who Will Crack Cloud Application Access SLAs?

2018-02-14T12:14:01-08:00

The chart below ought to be in every basic undergraduate textbook on packet networking and distributed computing. That it is absent says much about our technical maturity level as an industry. But before we look at what it means, let's go back to some basics.

When you deliver a utility service like water or gas, there's a unit for metering its supply. The electricity wattage consumed by a room is the sum of the wattage of the individual appliances. The house consumption is the sum of the rooms, the neighbourhood is the sum of the houses, and so on. Likewise, we can add up the demand for water, using litres. These resource units "add up" in a meaningful way. We can express a service level agreement (SLA) for utility service delivery in that standard unit in an unambiguous way. This allows us to agree both the final end-user delivery, as well as to contract supply at any administrative or management boundaries in the delivery network.

What's really weird about the broadband industry is that we've not yet got a standard metric of supply and demand that "adds up." What's even more peculiar is that people don't even seem to be aware of its absence, or feel the urge to look for one. What's absolutely bizarre is that it's hard to get people interested even when you do finally find a really good one!

Picking the right "unit" is hard because telecoms is different to power and water in a crucial way. With these physical utilities, we want more of something valuable. Broadband is an information utility, where we want less of something unwanted: latency (and in extremis, loss). That is a tricky conceptual about-turn. So we're selling the absence of something, not its presence. It's kind of asking "how much network latency mess-up can we deal with in order to deliver a tolerable level of application QoE screw-up?" Ideally, we'd like zero "mess-up" and "screw-up," but that's not on offer. And no, I don't expect ISPs to begin advertising "a bit less screwed-up than the competition" anytime soon to consumers!

The chart breaks down the latency into its independent constituent parts. What it says is: for any network (sub-)path, the latency comprises (G)eographic, packet (S)ize, and (V)ariable contention delay, the "vertical" (de)composition. Along the "horizontal" path, the "Gs", "Ss", and "Vs" all "add up". (They are probabilities, not simple scalars, but it's still just ordinary algebra.) You can "add up" the complete end-to-end path "mess-up" by breaking each sub-path "mess-up" into G, S and V; then adding the Gs, Ss, and Vs "horizontally"; and then "vertically" recombining their "total mess-up" (again, all using probability functions to reflect that we are dealing with randomness).

And that's it! We've now got a mathematics of latency which "adds up", just like wattage or litres. It's not proprietary, nobody holds a patent on it, everyone can use it. Any network equipment or monitoring enterprise with a standard level of competence can implement it as their network resource model. It's all documented in the open.

This may all seem a bit like science arcana, but it has real business implications. Adjust your retirement portfolio accordingly! Because it's really handy to have a collection of network SLAs that "add up" to a working telco service or SaaS application. In order to do that, you need to express them in a unit that "adds up". In theory, big telcos are involved in a "digital transformation" from commodity "pipes" into cloud service integration companies. With the occasional honourable exception (you know who you are!), there doesn't seem to be much appetite for engaging with fundamental science and engineering. Most major telcos are technological[...]
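The article does not prescribe an implementation, but the algebra is easy to demonstrate. A minimal sketch with made-up numbers, assuming each of G, S and V is an independent discrete delay distribution over 1 ms bins, shows how per-component distributions convolve into an end-to-end latency distribution:

    # A toy model of latency that "adds up": each component's delay is an
    # independent probability distribution, and components compose by
    # convolution (the delay of a path is the sum of its parts).
    # All numbers are made up for illustration.
    import numpy as np

    def compose(*pmfs):
        # Convolve per-component delay distributions into one end-to-end PMF.
        total = np.array([1.0])
        for p in pmfs:
            total = np.convolve(total, p)
        return total

    g = np.zeros(6); g[5] = 1.0        # G: fixed 5 ms geographic (propagation) delay
    s = np.zeros(3); s[2] = 1.0        # S: fixed 2 ms packet-size (serialization) delay
    v = np.array([0.7, 0.2, 0.1])      # V: contention adds 0/1/2 ms with these odds

    path = compose(g, s, v)
    print({ms: round(float(p), 2) for ms, p in enumerate(path) if p > 0})
    # -> {7: 0.7, 8: 0.2, 9: 0.1}

The same compose() call works "horizontally" across sub-paths as well as "vertically" across G, S and V, which is the whole point: the unit adds up.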



Donuts Acquires .TRAVEL TLD

2018-02-14T11:14:00-08:00

Donuts Inc. today announced it has acquired the .TRAVEL domain name from registry operator Tralliance Registry Management Company; the .TRAVEL domain becomes Donuts' 239th TLD. From the announcement: "Since its launch in 2005, the .TRAVEL domain has been embraced by the travel industry. Domain names ending in .TRAVEL now identify tens of thousands of travel businesses and organizations on the Internet. The .TRAVEL domain is widely recognized as of the highest quality, and is used by leading travel businesses such as: visitloscabos.travel, adventures.travel, hongkongdisneyland.travel, goldman.travel, AARP.travel and tens of thousands of others."

Follow CircleID on Twitter

More under: Domain Names, Registry Services, New TLDs




GDPR - Territorial Scope and the Need to Avoid Absurd and Inconsistent Results

2018-02-14T09:54:00-08:00

It's not just establishment, it's context! There is an urgent need to clarify the GDPR's territorial scope. Of the many changes the GDPR will usher in this May, the expansion of EU privacy law's territorial scope is one of the most important. The GDPR provides for broad application of its provisions both within the EU and globally. But the fact that the GDPR has a broad territorial scope does not mean that every company, or all data processing activities, are subject to it. Rather, the GDPR puts important limitations on its territorial scope that must be acknowledged and correctly analyzed by those interpreting the regulation for the global business community. Otherwise, it could lead to absurd implementation and bad policy, which no one wants.

EU Establishment

In essence:

- Where registrars are established in the EU, the registrars' use and processing of personal data is subject to the GDPR. That is no surprise to anyone.
- Where registrars have no establishment in the EU, but offer domain name registration services to data subjects in the EU, the processing of personal data in the context of such an offer will also be subject to the GDPR. Again, no surprise, and logical.
- However, where a registrar is based outside the EU, without an establishment in the EU, and uses a processor in the EU, such a non-EU based registrar (as a controller) will not be subject to the GDPR merely because of the EU-based processor's establishment in the EU. The GDPR only applies to the controller according to Article 3(1) GDPR where the processor in the EU would be considered the controller's establishment. If the controller uses an external service provider (not a group company), this processor will generally not be considered an establishment of the controller. It would only be caught by the GDPR if the processing is done "in the context" of that establishment.

That is the key, and I'll discuss an example of potentially absurd results if this is not interpreted correctly. NB: All obligations directly applicable to the processor under the GDPR will, of course, apply to the EU-based processor.

WHOIS

Consider the example of WHOIS (searchable registries of domain name holders), where there is presently much debate amongst the many and varied actors in the domain name industry over whether public WHOIS databases can remain public under the GDPR. The second part of ICANN's independent assessment of this issue offered an analysis of the GDPR's territorial reach that deserves closer scrutiny. Addressing the territorial limits of the law, the authors state: "Therefore, all processing of personal data is, no matter where it is carried out, within the territorial scope of the GDPR as long as the controller or processor is considered established within the EU; the nationality, citizenship or location of the data subject is irrelevant."

In other words, the authors conclude that as long as a controller or processor has an "establishment" in the EU, all processing of personal data it undertakes, regardless of the location or nationality of the data subject and regardless of whether the processing has any nexus to the EU, is subject to the GDPR. This is wrong. The analysis overlooks key language of the GDPR. Under Article 3.1, the law applies not to any processing that is done by a company that happens to have an establishment in the EU, but to processing done "in the context of" that establishment. This distinction makes a difference.

Imagine, for example, a Canadian company that has an office in Paris. Under the authors' analysis, the GDPR would apply to all processing done by that company simply by virtue of it having a Paris office, whether the data subjects inte[...]



The Future of .COM Pricing

2018-02-13T08:59:00-08:00

When you've been around the domain industry for as long as I have, you start to lose track of time. I was reminded late last year that the 6-year agreement Verisign struck with ICANN in 2012 to operate .com will be up for expiration in November of this year. Now, I don't for a second believe that .com will be operated by any other party, as Verisign's contract does give them the presumptive right of renewal. But what will be interesting to watch is what happens to Verisign's ability to increase the wholesale cost of .com names.

The 2012 agreement actually afforded Verisign the ability to increase prices by 7%, up to four times over the 6-year course of the contract. However, when the US Commerce Department approved the agreement, it did so without the ability for Verisign to implement those price increases. At that time, the wholesale price of a .com domain was $7.85, and that's where it stands today, with the prices to registrars being frozen. Under the terms of the original 2012 agreement, .com prices could have been as high as $10.26 today had Verisign taken advantage of their price increases. As an aside, I've long thought that the price of a single .com domain was incredibly inexpensive when you think about it in comparison to other costs of running a business.

While I don't have any concrete insight into whether the price freeze will continue, there is obviously a new administration in Washington DC. Their view on this agreement could be different from the previous administration's. Since this administration has come into office, we have seen a number of pro-business initiatives undertaken, so perhaps that will carry over to the Verisign agreement as well. Another big difference today is that the domain market, in general, is vastly different than it was in 2012, with the introduction of hundreds of new gTLDs. There are exponentially more alternatives to .com today than there were 6 years ago, so it's possible that too will have an impact on the decision.

With over 131 million registered .com names, it will be interesting to see how a potential increase of a few dollars per name would play out in the market, and the impact that it would have on corporate domain portfolios, which are still largely comprised of .com names.

Written by Matt Serlin, SVP, Client Services and Operations at Brandsight

Follow CircleID on Twitter

More under: Domain Names, ICANN [...]
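As a sanity check on that $10.26 figure: four compounded 7% increases on $7.85 reproduce it if each new price is truncated to the cent. The truncation rule is my assumption; the post doesn't spell out the agreement's rounding mechanics.

    # Four 7% increases on the frozen $7.85 wholesale price, truncating to
    # whole cents after each step (an assumed rounding rule, not from the
    # agreement itself).
    price = 7.85
    for _ in range(4):
        price = int(price * 1.07 * 100) / 100

    print(price)  # -> 10.26; straight compounding with no rounding gives ~10.29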



Why Is It So Hard to Run a Bitcoin Exchange?

2018-02-13T08:42:00-08:00

One of the chronic features of the Bitcoin landscape is that Bitcoin exchanges screw up and fail, starting with Mt. Gox. There's nothing conceptually very hard about running an exchange, so what's the problem?

The first problem is that Bitcoin and other blockchains are by design completely unforgiving. If there is a bug in your software which lets people steal coins, too bad, nothing to be done. Some environments need software that has to be perfect, or as close as we can make it, such as space probes that have to run for years or decades, and implanted medical devices where a bug could kill the patient. Programmers have software design techniques for those environments, but they generally start with a clear model of what the environment is and what sort of threats the device will have to face. Then they write and test the code as completely as they can, and burn it into a read-only memory in the device, which prevents deliberate or accidental later changes to the code.

Running an online cryptocurrency exchange is about as far from that model as one can imagine. The exchange's web site faces the Internet, where one can expect non-stop hostile attacks using rapidly evolving techniques. The software that runs the web site and the databases is ordinary server stuff, reasonably good quality, but way too big and way too dynamic to allow the sorts of techniques that space probes use. Nonetheless, there are plenty of ways to try and make an exchange secure.

A bitcoin exchange receives bitcoins and money from its customers, who then trade one for the other, and later ask for the results of the trade back. The bitcoins and money that the customers have sent stay in inventory until they're returned to the customers. If the exchange closes its books once a day, at that point the bitcoins in inventory (which are public since the bitcoin ledger is public) should match the amount the customers have sent minus the amount returned. Similarly, the amount in the exchange's bank account should match the net cash sent. The thing in the middle is a black hole, since with most bitcoin exchanges you have no idea where your bitcoins or cash have gone until you get them back, or sometimes you don't.

To make it hard to steal the bitcoins, an exchange might keep the inventory in a cold wallet, one where the private key needed to sign transactions is not on any computer connected to the Internet. Once a day they might burn a list of bitcoin withdrawals onto a CD, take the CD into a vault where there's a computer with the private wallet key, create and sign the withdrawal transactions and burn them onto another CD, leave the computer, the first CD, and a copy of the second CD in the vault, and take the second CD to an online computer that can send out the transactions. They could do something similar for cash withdrawals, with a bank that required a cryptographic signature, with a key stored on an offline computer, for withdrawal instructions.

None of this is exotic, and while it wouldn't make anything fraud-proof, it'd at least be possible to audit what's happening and have a daily check of whether the money and bitcoins are where they are supposed to be. But when I read about the endless stories of crooks breaking into exchanges and stealing cryptocurrencies from hot (online) wallets, it's painfully clear that the exchanges, at least the ones that got hacked, don't do even this sort of simple stuff.

Admittedly, this would slow things down. If there's one CD burned per day, you can only withdraw your money or bitcoins once per day. Personally, I think that's entirely reasonable [...]
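The daily check described above is simple enough to state in a few lines. A toy sketch, with illustrative figures and my own function name rather than any exchange's actual audit code:

    # A toy version of the author's daily audit: on-chain inventory and the
    # bank balance should each equal what customers sent minus what was
    # returned. A real check would read the public ledger and the bank
    # statement rather than trust the exchange's own database.
    def daily_books_balance(btc_in, btc_out, btc_inventory,
                            cash_in, cash_out, bank_balance):
        btc_ok = abs((btc_in - btc_out) - btc_inventory) < 1e-8
        cash_ok = abs((cash_in - cash_out) - bank_balance) < 0.01
        return btc_ok and cash_ok

    # 120 BTC deposited, 35 withdrawn, 85 in the cold wallet;
    # $9M deposited, $2M withdrawn, $7M in the bank: the books balance.
    print(daily_books_balance(120.0, 35.0, 85.0, 9e6, 2e6, 7e6))  # True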



Will 5G Trigger Smart City PPP Collaboration?

2018-02-13T08:18:00-08:00

As discussed in previous analyses, the arrival of 5G will trigger a totally new development in telecommunications. Not just in relation to better broadband services on mobile phones: it will also generate opportunities for a range of IoT (internet of things) developments that, among other projects, are grouped together under smart cities (feel free to read 'digital' or 'connected cities'). The problems related to the development of 5G infrastructure, as well as to smart cities, offer a great opportunity to develop new business models for both telecommunications companies and for cities and communities, to create win-win situations.

5G will require a massive increase in the number of infrastructure access points in mobile networks; many more towers and antennas will need to be installed by the telecommunications companies to deliver the wide range of services that are becoming available through this new technology. Furthermore, all the access points need to be connected to a fibre optic network to manage the capacity and the quality needed for the many broadband services that will be carried over it. This is an ideal network structure for cities, which require a very dense level of connectivity, but cities don't have the funds to make that happen. So telecommunications companies working together with cities could be a win-win situation.

Cities that do have a holistic and integrated smart city strategy in place can take a leadership role by developing the requirements needed for a city-wide digital infrastructure that can provide social and economic benefits for their citizens. The critical element of an integrated strategy is that it must cut through the traditional bureaucratic silo structures. 5G is an ideal technology for a range of city-based IoT services in relation to energy, environment, sustainability, mobility, healthcare, etc.

Mobile network infrastructure (including 5G) will generally follow the main arteries and hotspots of the city, where at the same time there is usually a range of city- and utilities-based infrastructure that can be used for 5G co-location. IoT is also seen by the operators as a new way to move up the value chain. But if we are looking at 5G as potential digital infrastructure for smart cities, the cities' infrastructure requirements will need to be discussed upfront with the network operators who are interested in building 5G networks. By working with the cities, these operators instantly get so-called anchor tenants for their network, which will help them to develop the viable business and investment models needed for such a network. The wrong strategy would be to put the requirements of the telecommunications companies before those of the cities.

The development of 5G will take a decade or so (2020-2030), and it is obvious that cities that already have their strategic (holistic) smart city plans ready are in a prime position to sit down with the operators; they will be among the first who will be able to develop connected cities for their people. This will, of course, create enormous benefits and will attract new citizens and new businesses, especially those who understand the advantages of living or being situated in such a digital place.

MVNOs (mobile virtual network operators) are another potential winner in this development; they could specialise in what is needed to create a smart city, smart community, smart precinct, etc. Telecommunication companies AT&T and Verizon in the USA clearly see the opportunities to work with cities, however, this is mainly based on getting easy access [...]



Suggestions for the Cuba Internet Task Force

2018-02-13T07:18:00-08:00

The Cuba Internet Task Force (CITF) held their inaugural meeting last week. Deputy Assistant Secretary for Western Hemisphere Affairs John S. Creamer will chair the CITF, and there are government representatives from the Department of State, Office of Cuba Broadcasting, Federal Communications Commission, National Telecommunications and Information Administration and Agency for International Development. Freedom House will represent NGOs and the Information Technology Industry Council will represent the IT industry.

They agreed to form two subcommittees: one to explore the role of media and freedom of information in Cuba and one to explore Internet access. The subcommittees are to provide preliminary reports of recommendations within six months, and the CITF will reconvene in October to review those preliminary reports and prepare a final report with recommendations for the Secretary of State and the President. They are soliciting public comments, looking for volunteers for service on the subcommittees and have established a Web site.

I may be wrong, but it sounds like the subcommittees will be doing much of the actual work. The subcommittee on technological challenges to Internet access will include US technology firms and industry representatives, and the subcommittee on media and freedom of information will include NGOs and program implementers with a focus on activities that encourage freedom of expression in Cuba through independent media and Internet freedom. They aim to maintain balance by including members from industry, academia and legal, labor, or other professionals.

I hope the subcommittee on media and Internet freedom resists proposals for clandestine programs. Those that have failed in the past have provided the Cuban government with an excuse for repression and cost the United States money and prestige. Both the Cuban and United States governments have overstated what their impact would have been had they succeeded.

Cuba's current Wi-Fi hotspots, navigation rooms, home DSL and 3G mobile are stopgap efforts based on obsolete technology, and they provide inferior Internet access to a limited number of people. (El Paquete Semanal is the most important substitute for a modern Internet in Cuba today). It would be difficult for the subcommittee on technological challenges to devise plans or offer support for activities the current Cuban government would allow and be able to afford. That being said, the situation may ease somewhat after Raúl Castro steps down in April.

Are there short-run steps Cuba would be willing to take that we could assist them with? For example, the next Cuban government might be willing to consider legitimizing and assisting some citizen-implemented stopgap measures like street nets, rural community networks, geostationary satellite service and LANs in schools and other organizations. They might also be willing to accept educational material and services like access to online material from Coursera or LAN-based courseware from MIT or The Khan Academy. (At the time of President Obama's visit, Cisco and the Universidad de las Ciencias Informaticas promised to cooperate in bringing the Cisco Network Academy to Cuba, but, as far as I know, that has not happened).

The US requires Coursera and other companies to block Cuban access to their services. That is a policy we could reverse unilaterally, without the permission of the Cuban government. Google is the only US Internet company that has established a relationship with and been allowed to install infrastr[...]