
CircleID



Latest posts on CircleID



Updated: 2018-01-22T14:00:00-08:00

 



Montana Becomes First State to Require ISPs to Abide by Net Neutrality Principles Despite FCC Repeal

2018-01-22T14:00:00-08:00

Montana Governor Steve Bullock has signed an executive order requiring ISPs with state contracts to adhere to internet neutrality principles. "There has been a lot of talk around the country about how to respond to the recent decision by the Federal Communications Commission to repeal net neutrality rules, which keep the internet free and open. It's time to actually do something about it," said Governor Bullock. "This is a simple step states can take to preserve and protect net neutrality. We can't wait for folks in Washington DC to come to their senses and reinstate these rules." The Montana order prohibits service providers from blocking lawful content; throttling, impairing, or degrading lawful internet traffic on the basis of content; engaging in paid prioritization; or unreasonably interfering with or disadvantaging users' ability to select, access, and use broadband internet access service.

Follow CircleID on Twitter

More under: Net Neutrality, Policy & Regulation




Next on the US Telecoms Agenda: Downgrading Broadband

2018-01-22T10:17:00-08:00

The American industry lobby (AT&T, Verizon, and Comcast) successfully pushed the regulator to get rid of net neutrality, but they are not stopping there. They sense the opportunity under the Trump Administration to roll back further any regulations that stand in the way of maximising their profits. As all three largely enjoy geographic monopolies in their regions of operation, there is little competition driving innovation forward, so their aim is to milk the networks they currently have in place for as long as possible. Their ability to do so goes back to 1996, when the FCC declared that broadband was not a telecoms service and that telecoms regulations therefore did not apply (e.g., providing retail access to independent ISPs). To limit the misuse of their dominant positions, the FCC, under the Obama Administration, introduced net neutrality; however, this has now been scrapped under the Trump Administration. For more information on net neutrality, see my last analysis on this topic.

Next on the FCC's agenda is the downgrading of the definition of broadband. At the moment this stands at 25 Mb/s download and 4 Mb/s upload. The incumbents are lobbying to bring that down to 10 Mb/s and 1 Mb/s. They could then claim that they have fulfilled their broadband obligations, since most landlines are already able to deliver those speeds; they would not need to upgrade these networks any further and would instead use their mobile networks for higher-speed services. This would significantly increase the costs to those users who need daily broadband services for family use, entertainment, etc. (typically 25 Mb/s+). Areas that are unable to get such landline-based services are increasingly being forced off the landline network and offered a mobile service instead.

As is the case in most countries, in the United States broadband is more expensive over mobile networks, especially if one has to use the mobile connection as the only option for all of one's broadband requirements (e.g., Netflix). Another change being rumoured is the downgrading of the school broadband service (E-Rate). So far this service has successfully connected 90% of the schools in the USA, but constant updates and upgrades of this service are needed, and this is where it looks like the FCC will start cutting its funding.

Initially, the FCC agreed with the industry's suggestions to downgrade the broadband requirements. Fortunately, however, there was a pushback last week, with the FCC, albeit reluctantly, stating that mobile broadband is not an alternative to fixed broadband and that it will not downgrade the current regulated broadband speed. Having said all of this, people with low-level broadband requirements will find mobile broadband a great alternative as long as their usage remains low to moderate. This market is estimated to grow to between 15% and 30% of the overall fixed broadband market.

Written by Paul Budde, Managing Director of Paul Budde Communication

Follow CircleID on Twitter

More under: Access Providers, Broadband, Net Neutrality, Policy & Regulation [...]



Industrial Plant Attack Generates Renewed Concerns Over Critical Infrastructure Hacking Threats

2018-01-19T17:39:00-08:00

A recent malware attack on the control systems of an industrial plant has renewed concerns about the threat hacking poses to critical infrastructure. Lily Hay Newman, reporting in Wired: "while security researchers offered some analysis last month of the malware used in the attack, called Triton or Trisis, newly revealed details of how it works expose just how vulnerable industrial plants—and their failsafe mechanisms—could be to manipulation." The report also notes that "the malware targets the Triconex firmware vulnerability, manipulates the system to steadily increase its ability to make changes and issue commands, and then deposits the RAT, which awaits further remote instructions from the attackers."

Follow CircleID on Twitter

More under: Cyberattack, Cybersecurity, Malware, Networks




Some Hackers Earning Over 16 Times That of Full-Time Software Engineers in Their Home Country

2018-01-19T13:40:00-08:00

[image] Geographic Money Flow – a visualization of bounties by geography, showing on the left where the companies paying bounties are located and on the right where the hackers receiving bounties are located.

A report from one of the largest documented surveys of the ethical hacking community reveals that some hackers are earning over 16 times what full-time software engineers make in their home country. The study, which had 1,698 respondents, was conducted by HackerOne, a global hacker community platform that has seen a 10-fold increase in its registered users over the past two years.

Additional key findings:

— Nearly 1 in 4 hackers have not reported a vulnerability that they found because the company didn't have a channel to disclose it.

— Money remains a top reason for why bug bounty hackers hack, but it's fallen from first to fourth place compared to 2016. Above all, hackers are motivated by the opportunity to learn tips and techniques, with "to be challenged" and "to have fun" tied for second.

— India (23%) and the United States (20%) are the top two countries represented by the HackerOne hacker community, followed by Russia (6%), Pakistan (4%) and United Kingdom (4%).

— Nearly 58% are self-taught hackers. Despite 50% of hackers having studied computer science at an undergraduate or graduate level, and 26.4% having studied computer science in high school or earlier, less than 5% learned hacking skills in a classroom.

— While 37% of hackers say they hack as a hobby in their spare time, about 12% of hackers on HackerOne make $20,000 or more annually from bug bounties; over 3% make more than $100,000 per year, and 1.1% make over $350,000 annually. A quarter of hackers rely on bounties for at least 50% of their annual income, and 13.7% say their bounties represent 90-100% of their annual income.

Follow CircleID on Twitter

More under: Cybersecurity




China Cloud Providers Catching Up to American Firms, Plus China Has Home Market Advantage

2018-01-19T12:34:00-08:00

"Chinese tech companies plan to steal American cloud firms' thunder," says The Economist. Alibaba has set itself the goal of matching or surpassing Amazon Web Services by 2019. "We have taken on Amazon on all fronts," says Alibaba's Mr. Hu. From the article: "Whichever firm ends up leading, Chinese and Western cloud providers are bound to run into each other — though not so much in their home countries as in such places as Europe and India. AWS and its main rivals have been busy building data centres abroad for some time, including in China. But Alibaba and Tencent are catching up. Alibaba, for example, operates a dozen computing plants abroad and will open another one this month in India, near Mumbai."

Follow CircleID on Twitter

More under: Cloud Computing




Tips for Ecommerce to Survive and Thrive with GDPR

2018-01-19T11:45:00-08:00

The regulatory environment for brands and retailers that do business online is getting stricter thanks to regulatory changes in Europe with the General Data Protection Regulation (GDPR), as well as existing regulations in the U.S. Companies that adapt quickly can turn these changes into a competitive advantage. As we grapple worldwide with the implications of the incredible amount of personal data generated every day, consumers are pressuring brands and legislators alike for more control over their information. This becomes increasingly complicated as more businesses pivot towards subscription models, where customer-brand relationships are fluid, longer-term, and involve more uses of personal data and consumer behavior information. Neglecting the privacy desires of these consumers puts brands at risk of everything from fines and penalties to a loss of customer trust. There are a number of key compliance obligations that organizations should consider as they adopt new business models and expand to new geographies.

Get ready for GDPR

The GDPR, passed by the European Parliament and Council in 2016, bolsters data protection measures for Europeans. The regulation, which becomes enforceable May 25, 2018, gives these individuals greater control over their personal data and is expected to simplify the regulatory environment for brands operating online by providing uniformity across Europe. The ripples caused by this legislation will reach every corner of the global retail market, including the U.S. According to Ovum, 70 percent of global IT decision-makers expect to increase spending to meet data protection requirements. The GDPR will force companies that process or receive European data (even businesses located outside Europe) to transform their information handling practices to meet a new, higher standard. For instance, part of the regulation calls for data portability, allowing an individual to request transfer of their personal data from one processing system to another in a commonly used format. Though the regulation is not enforceable for a few months, brands that process European data should already be preparing. Once it goes into effect, the penalties are steep: organizations that do not comply with certain GDPR articles can incur fines of 20 million euros or 4 percent of total global revenue, whichever is greater.

In the U.S., no state is the same

In the U.S., there is no single, comprehensive federal law like the GDPR that regulates the collection and use of personal data. Instead, the U.S. has a patchwork system of federal and state laws and regulations that sometimes overlap. Many guidelines have been developed by governmental agencies and industry groups, but they are not enforceable by law. They are, however, part of self-regulatory guidelines considered "best practices." These frameworks include accountability components increasingly used as a tool for regulatory alignment. Although there isn't a comprehensive federal U.S. data privacy law, there are a number of federal privacy-related laws that regulate the collection and use of personal data. Some apply to particular categories of information, such as financial or health data, or electronic communications. Others apply to activities that use personal information, such as telemarketing and commercial email. Particular states, like California, require websites that collect user data to communicate the type of information being collected, the types of third parties they might share that information with, and their online tracking practices. Connecticut and Massachusetts also have stringent laws protecting consumers' data and requiring companies to safeguard that information.
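In practice, the data-portability obligation discussed above amounts to being able to hand a customer their records in a commonly used, machine-readable format. A minimal sketch in Python, using JSON; the record layout and field names here are hypothetical illustrations, not anything prescribed by the regulation:

```python
import json

# Hypothetical customer record; in a real system this would be assembled
# from the databases that hold the individual's personal data.
customer = {
    "customer_id": "c-1001",
    "email": "alice@example.com",
    "consents": {"marketing_email": False, "analytics": True},
    "order_history": [{"order_id": "o-1", "total_eur": 49.90}],
}

def export_customer_data(record: dict) -> str:
    """Serialize one individual's data for transfer to another system."""
    return json.dumps(record, indent=2, sort_keys=True)

print(export_customer_data(customer))
```

The point of the sketch is only that the export must be complete, structured, and consumable by another processor, not a PDF or a screen scrape.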
The risk of noncompliance

The penalties for noncompliance vary depending on the type and severity of the violation, ranging, for example, from very high fines and delays in payment processing t[...]



China Stepping Up Cryptocurrency Crackdown

2018-01-17T06:45:00-08:00

China is preparing for a new crackdown on cryptocurrency, planning to stamp out remaining trading in the country, according to state media. From the AFP report via Channel NewsAsia: "China will gradually clean up over-the-counter trading platforms, peer-to-peer networks where large exchanges occur and firms registered in the country which allow Chinese to trade overseas, the state-run Securities Journal said Tuesday. The publication cited an anonymous source close to regulators tackling online finance risks. The new plan follows China's crackdown on cryptocurrency trading last year, which saw Beijing shut down bitcoin exchanges and ban all initial coin offerings."

Follow CircleID on Twitter

More under: Blockchain, Policy & Regulation




Preventing 'Techlash' in 2018: Regulatory Threats

2018-01-16T10:55:00-08:00

U.S. Chamber of Commerce President Thomas J. Donohue on January 10, 2018, warned that "techlash" is a threat to prosperity in 2018. What was he getting at? A "backlash against major tech companies is gaining strength — both at home and abroad, and among consumers and governments alike." "Techlash" is a shorthand reference to a variety of impulses by government and others to shape markets, services, and products; protect local interests; and step in early to prevent potential harm to competition or consumers. These impulses lead to a variety of actions and legal standards that can slow or change the trajectory of innovations from artificial intelligence to the Internet of Things (IoT) to business process improvements. According to Mr. Donohue, "[w]e must be careful that this 'techlash' doesn't result in broad regulatory overreach that stifles innovation and stops positive advancements in their tracks." Here are a few examples of the challenges ahead:

Global privacy and security regulations impose compliance obligations and erect barriers to the free flow of data, products, and services. Examples include the European Union's General Data Protection Regulation (GDPR), its Network Information Security Directive (NIS Directive), the e-Privacy initiative, and a nascent effort on IoT certifications. "A growing number of countries are making it more expensive and time consuming, if not illegal, to transfer data overseas." [1] China's new cyber law "requires local and overseas firms to submit to security checks and store user data within the country." [2] Such efforts may be intended to level the playing field with large U.S. technology companies, but whatever their impetus, they create enormous compliance costs and impediments to multinational operations. [3] Emerging regulation around the world may do more harm than good, particularly to U.S.-based organizations.

Premature regulation and oversight drive up the costs of doing business, particularly for new entrants or disruptors. Government should act only when it has evidence of actual harm to consumers or competition and the benefits outweigh the costs. When government rushes in with a technical mandate, innovation suffers. Likewise, if the government demands business changes without evidence of anti-competitive effects, it distorts the marketplace. Premature regulations impose unnecessary compliance burdens, so governments should exercise "regulatory humility" and wait for experience and evidence.

Unjustified class action litigation over technology strikes fear in the hearts of innovators. The growth of "no injury" lawsuits targeting the technology sector is likewise a concern. Class action plaintiffs were quick to sue GM and Toyota after news reports of a vulnerability in Jeeps, and dozens of plaintiffs immediately sued Intel after the chip processor vulnerabilities named Meltdown and Spectre were reported. [4] While courts have generally rejected suits based on a "risk of hacking," [5] plaintiffs continue to push these theories, along with novel "economic loss" claims from "overpaying for" [6] vulnerable devices. Legal uncertainty about such claims, and the rush to obtain damages awards and attorneys' fees, threatens to increase costs and chills companies' willingness to engage.

State laws, such as those attempting to impose "net neutrality" and online privacy obligations at the state level, threaten to balkanize regulation of technology. "Lawmakers in at least six states, including California and New York, have introduced bills in recent weeks that would forbid internet providers to block or slow down sites or online services." [7] State-by-state regulation of global ISP and carrier network practices is likely to create major inefficiencies. Likewise, state privacy laws create complexity for organizations whose operations, products, and customers cross state lines.
Industry [...]



The Over-Optimization Meltdown

2018-01-16T09:44:00-08:00

In simple terms, Meltdown and Spectre are easy vulnerabilities to understand. Imagine a gang of thieves waiting for a stagecoach carrying a month's worth of payroll. There are two roads the coach could take, and a fork, or a branch, where the driver decides which one to take. The driver could take either one. What is the solution? Station robbers along both roads beyond the branch, and wait to see which one the driver chooses. When you know, pull the resources from one road to the other, so you can effectively rob the stage. This is much the same as a modern processor handling a branch: the user could have put anything into some field, or retrieved anything from a database, that might cause the software to run one of two sets of instructions. There is no way for the processor to know, so it runs both of them. To run both sets of instructions, the processor will pull in the contents of specific memory locations and begin executing code across them. Some of these memory locations might not be pieces of memory the currently running software is supposed to be able to access, but this is not checked until the branch is chosen. Hence a piece of software can force the processor to load memory it should not have access to by calling the right instructions in a speculative branch, exposing those bits of memory to be read by the software.

But my point here is not to consider the problem itself. What is more interesting is the thinking that leads to this kind of defect being placed into the design. There are, in all designs, tradeoffs. For instance, in the real (physical) world, there is the tradeoff between fast, cheap, and quality. In the database world, there is the tradeoff between consistency, accessibility, and partitionability. I have, for many years, maintained that in network design there is a tradeoff between state, optimization, and surfaces.

What Meltdown and Spectre represent is the unintended consequence of a strong drive towards enhancing performance. It's not that the engineers who designed speculative execution, and put it into silicon, are dumb. In fact, they are brilliant engineers who have helped drive the art of computing ever faster forward in ways probably unimaginable even twenty years ago. There are known tradeoffs when using speculative execution, such as:

Power – some code is going to be run, and the contents of some memory fetched, that will not be used. Fetching these memory locations, and running this code, is not free; there is some amount of power used, and heat generated, in speculative execution. This was actually a point of discussion early in the life of speculative execution, but the performance gains were so solid that the power and heat concerns were eventually set aside.

Real estate – speculative execution requires physical real estate in the processor. It makes processors larger and uses silicon gates that could be used for something else. Overall, the most performance-enhancing use of the available real estate was shown to be the most economically useful, and thus speculative execution became an important part of chip design.

State – speculative execution drives the amount of state, and the speed at which that state changes, much higher than it would otherwise be. Again, the performance gains were strong enough to make the added state worth the effort.

There was one more tradeoff, we now know, that was not considered during the initial days and years when speculative execution was being discussed: security. So maybe it is time to take stock and think about lessons learned. First, it is always the unexpected consequence that will come back to bite you in the end. Second, there is almost always an unexpected consequence. The value of experience is in being bitten by unexpected consequences enough times [...]
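The side channel sketched above can be modelled in a toy simulation. This is a conceptual illustration, not a working exploit: the `cache` set, the `victim` and `attacker` functions, and the planted `SECRET` value are all hypothetical stand-ins, and "speculation" is modelled simply by performing the guarded load before the bounds check takes effect, the way a speculating processor transiently would.

```python
# Toy model of a Spectre-style bounds-check bypass. Real hardware leaks
# through cache timing; here the "cache footprint" is just a Python set.

SECRET = 42                      # hypothetical byte the attacker may not read
array1 = [1, 2, 3, 4]            # legitimate data; SECRET sits just past it
memory = array1 + [SECRET]       # flat memory: index 4 is out of bounds
cache = set()                    # which "cache lines" have been touched

def victim(index):
    """A bounds-checked read whose body runs 'speculatively' regardless."""
    value = memory[index]        # transient load: happens before the check
    cache.add(value)             # leaves a footprint keyed on the value
    if index < len(array1):      # the branch the processor guessed past
        return value
    return 0                     # architecturally, the read never happened

def attacker():
    victim(4)                    # returns 0, but the cache state changed
    # Probe which "line" is hot; that reveals the secret byte.
    return next(v for v in range(256) if v in cache and v not in array1)

print(attacker())
```

The architectural result of `victim(4)` is a harmless 0; the secret escapes only through the side state the speculative work left behind, which is exactly why the flaw evaded the usual correctness checks.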



What is the Future for Mobile Network Operators?

2018-01-16T08:07:00-08:00

The telecommunication industry is continuing to resist structural changes, but the reality is that if operators don't transform, technology will do it for them. We have seen the fixed telecom operators slowly being pushed back into the infrastructure utility market. Mobile networks are moving in the same direction: the largest part of their network will become a utility. With currently two, three or four mobile infrastructure providers per country and little economic sense in overbuilding the basic infrastructure, the industry is facing serious problems. The major use of mobile phones is data (95%+), and the use of data will only increase. This means that in order to handle the capacity, the underlying backbone network needs to be fibre-based. Already most mobile base stations in cities are linked to fibre optic networks, with more connected to fibre year by year. 5G will only speed up that development, as only a fibre network will be able to handle the increased broadband traffic. Independent mobile tower operators are taking over more and more of these towers, as it doesn't make economic sense for every mobile operator to deploy its own towers and build its own fibre network to them. With mobile towers closer and closer to the end users, there is an opportunity for the mobile operators to offer effective high-speed broadband competition to the fixed network operators. Most fixed network connections are still based on the copper network, and this creates an opportunity for mobile operators with deep fibre networks and, for example, a last-mile 5G connection. The next step for the fixed operators will be to decide how to proceed beyond the copper or HFC networks. Fibre-to-the-home and fibre-to-the-curb are the most likely options here, and 5G is certainly one of the options to drive proper high-speed broadband into people's homes. So a convergence is taking place.

Increasingly the backbone networks of the mobile operators will mimic the fixed ones, and there will be little room for duplication of these networks. So wireless will increasingly be a retail element rather than a basic network. These changes are already reflected in the valuations of some of the mobile network operators (MNOs) and mobile tower operators: those of the MNOs are going down while those of the tower operators are going up. This is a clear indication of where the financial market sees the market moving. So, rather than resisting change, the MNOs should take a leadership role in the transformation of the industry, especially given the inevitable changes that are going to occur in the fixed network. A holistic approach will be needed instead of one aimed at protecting a sharp divide between mobile and fixed networks.

Written by Paul Budde, Managing Director of Paul Budde Communication

Follow CircleID on Twitter

More under: Access Providers, Mobile Internet, Networks, Wireless [...]



Recalling 2017: The Year in Domain Data

2018-01-16T07:53:00-08:00

It is safe to say that 2017 was a turbulent year in more ways than one. There was the ongoing clash between WHOIS information and user privacy, the hope that top-level domains would finally take off, and multiple hacks of large corporations that reignited talks about cybersecurity. While many of these topics are essential and will likely resurface in the coming year, it is also important to look back at 2017 through unambiguous data. That is why we have analyzed more than 60 million domains we found and indexed for the first time in 2017 for comprehensive insights.

Generic top-level domains maintain their popularity

It was June 2017 when the capacity of the dot Com domain became a point of contention. An article published by Quartz claimed that we were rapidly running out of useable dot Com names, and it seemed to cause a stir in other media. It was, however, soon debunked by those pointing to the domain aftermarket and the frequent reselling of valuable domain names. If we look back at last year, it's clear that the end of the dot Com domain is indeed not yet here. The most successful top-level domain was still dominant among new websites and was used for almost 60 percent of all domains created last year. Nearly half of these domains were registered in the United States, followed by 16 percent in China and only 5 percent in Canada. Other generic top-level domains seemed to rule as well, with gTLDs making up roughly 70 percent of all registered domains in 2017. They were followed by new generic top-level domains with a market share of 17 percent, with the most registrations for dot Loan and dot XYZ. Country-code TLDs fell behind, accounting for only 14 percent of all created domains.

No stopping eCommerce growth

It will come as no surprise that the online retail industry is still on the up. Many reports have already noted this, and our data can confirm these claims. We noticed almost 660,000 new online stores in our database during 2017. These mostly appeared in the United States (28.1 percent), but China (18.5 percent) and Canada (10.3 percent) also played a big part in last year's eCommerce developments. It seems that payment provider PayPal is still a considerable player in the market: its payment method was available in 70 percent of all online stores created last year. In comparison, Chinese competitor AliPay was detected in only 5 percent of eCommerce websites founded in 2017. Almost a third of these online stores preferred an in-house developed or moderated shopping cart system, but we also saw a definite rise of the out-of-the-box eCommerce solution. Systems like Shopify (24.1 percent) and WooCommerce (15.7 percent) had a significant share of the market. What is perhaps more surprising, given the recent popularity of cryptocurrency, is that the digital coin's rise to fame does not seem to translate into actual use. Of all online stores first created in 2017, only 2.5 percent accept Bitcoin in return for their goods or services. This is a very slim share compared to the availability of PayPal (70 percent), Visa (60 percent) and MasterCard (46 percent) as payment options.

Almost 60 percent of websites developed

There is, of course, a big difference between registering a domain name and developing that domain into a site. In 2017, we saw that just 60 percent of all domains turned into a website. This number corresponds to the overall average we see in our entire database — around 60 percent of all domains become websites. Of these developed websites, WordPress was undoubtedly the most popular CMS: almost 55 percent of the sites were built using this system, which gives it a massive lead over competitors such as WIX (11 percent) and Squarespace (7 percent). The most popular scripting language foun[...]



In Memoriam: UDRPsearch.com

2018-01-15T06:46:00-08:00

I have hesitated in writing this memorial for UDRPsearch.com because I did not want to announce a demise that may not be true, or for fear that my saying it will make it so. The website went dark for a short period in 2017 before being restored after a brief shutdown, and (I thought) it could happen again. I was waiting for history to repeat itself. But the website remains dark, without explanation, and I fear it will not return. We lost it on or about January 6, 2018. I did not record when UDRPsearch.com first appeared, but my guess is somewhere around 2010 or 2012. It has been an invaluable research tool for the reason that it made basic information accessible across providers, so that if I wanted to find cases that contained a certain word ("denied" or "credible," for example) it brought me all of them. For having created and maintained it, I thank you, Dave Lahoti (Virtual Point), and I am sure the domain name and trademark community and those following the development of domain name jurisprudence (including innocent research drudges) thank you also for your generosity in providing and maintaining the database for so many years. Its loss is (will be) lamented by all! (This is not an obituary of Mr. Lahoti, who is very much alive, and for whom I wish many more years of entrepreneurial success.)

UDRPsearch.com is not the only research tool to publish domain name decisions. Paragraph 4(j) of the Uniform Domain Name Dispute Resolution Policy (Policy) mandates in part that "All decisions under this Policy will be published in full over the Internet." Also, Rule 16(b) of the Rules of the Policy states that "the Provider shall publish the full decision and the date of its implementation on a publicly accessible web site." Each of the providers maintains search functionality for decisions filed by its Panels, all free as mandated (not equally good, incidentally), but until the fatal day only UDRPsearch.com captured decisions from all the providers in a consolidated database. (ICANN had tried such a consolidation, I remember, but abandoned it.) The home page now resolves to a page that proclaims "Think Outside the Dot," which is all very well, but not so wonderful if you want to think within the Dot. In its plain-vanilla way, it was the best of all, and I mourn its demise.

The reader may ask, why lament something that is only "plain-vanilla"? The answer lies partly in Mr. Lahoti's genius for simply collecting information and partly in his modest goals. UDRPsearch.com is (was) simply a collection of information, awake all the time, receiving and announcing new cases and new decisions (UDRP and URS), providing in column form domain names "transferred," "denied," or "withdrawn," and who won and who lost. All very basic; very simple. And just like the myth of Earth ultimately resting on a column of "turtles all the way down," UDRPsearch.com comprehended the present, the intermediate, and the remote past (all the way down to the first decided case), and not just from providers that are still with us. It was a database of decisions without discrimination as to their sources. It had a search field that was primitive (searching for one word!) but adequate. We will only see its light again if a patron steps forward. There are, of course, a number of subscription domain name search resources, but they are not really within reach of small investors and advisors. A recent free service collects only WIPO decisions (DNDisputes.com), but it is useful because it provides a statistical breakdown of information. There is also a service that collects reverse domain name hijacking cases, Reverse Domain Name Hijacking Information. On top of this, there[...]



Using Gerrymandering Technology to Fight Gerrymandering

2018-01-14T19:13:00-08:00

In 1991, eight high-level Soviet officials attempted a coup that failed after two days. During those two days, citizen journalists and activists used Usenet newsgroups to carry traffic into, out of, and within Russia (some 70 cities). News spread, and protests were organized in Russia. In the West, we saw images of Boris Yeltsin speaking to demonstrators while standing on top of a tank, and the Russians saw that we were aware of and reporting on the coup. The coup was defeated, democracy prevailed, and we naively concluded that computer networks were a tool for democracy, political transparency, freedom of speech, etc. We also believed dictators would face a dilemma — having to accept democratic information sharing in order to reap the economic and social benefits of the Internet. Today it is clear that we naively overlooked the fact that the Internet is a useful tool for dictators as well as democrats. We have seen it used by terrorists to target rockets and for censorship, propaganda, surveillance, and lying. Another anti-democratic political practice, gerrymandering — defining voting districts to favor one party or candidate — is in the news because a panel of federal judges has ordered North Carolina to redraw its gerrymandered congressional map. The panel struck down North Carolina's congressional map, saying it was unconstitutional because it violated the 14th Amendment guarantee of equal protection. Judge James A. Wynn Jr., in a biting 191-page opinion, said that Republicans in the North Carolina legislature had been "motivated by invidious partisan intent" as they carried out their obligation in 2016 to divide the state into 13 congressional districts, 10 of which are held by Republicans. The ruling will be appealed directly to the Supreme Court, which is also hearing Wisconsin and Maryland gerrymandering cases. 
The Wisconsin case is similar to North Carolina's: it is based on the 14th Amendment, challenges the state's district map, and is pro-Democratic, while the Maryland case challenges the redrawing of a single district, is based on the 1st Amendment, and is pro-Republican. Gerrymandering is not new — Patrick Henry tried to defeat James Madison in 1788 by drawing an anti-federalist district. He failed because he did not have good data and computers, but today's politicians have geographic information system software and the data they need to automate efficient, precise gerrymandering. The Republican party has used Internet-enabled gerrymandering to gain a congressional advantage. The Democratic party might be tempted to fight fire with fire, but that would be slow and undemocratic. The North Carolina judicial panel has a better solution. They gave the legislature until January 24 to present a "remedial plan," and the court will institute its own map if it finds the new district lines unsatisfactory. If that happens, the court can use the same sorts of tools and data that have been used to produce gerrymandered districts. Instead of using the technology to optimize in favor of either party, it will seek maps that equalize district populations, minimize geographic perimeters, respect natural boundaries like rivers, maximize racial diversity, etc. In general, courts are more likely to be non-partisan than legislatures. As I said at the start, the Internet is a tool that can be used by good guys and bad guys. Update, Jan 19, 2018: The U.S. Supreme Court granted a stay of the court order requiring North Carolina lawmakers to produce a revised congressional voting map within two weeks. This temporary delay probably means the current map will be used in the 2018 election. In a related case, the Pennsylvania state supreme court is currently hearing a g[...]
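The "minimize geographic perimeters" criterion mentioned above is usually captured by a compactness score. As a hedged illustration (the article does not say which metric a court would use), here is one common measure, the Polsby-Popper score, which compares a district's area to the area of a circle with the same perimeter:

```python
import math

def polsby_popper(area: float, perimeter: float) -> float:
    """Polsby-Popper compactness: 4*pi*A / P^2.
    Scores 1.0 for a perfect circle; low scores indicate sprawling,
    possibly gerrymandered, district shapes."""
    return 4 * math.pi * area / perimeter ** 2

# A unit circle scores exactly 1.0; a long, thin 100x1 rectangle
# (a caricature of a gerrymandered district) scores far lower.
circle = polsby_popper(math.pi * 1 ** 2, 2 * math.pi * 1)   # -> 1.0
thin_rect = polsby_popper(100 * 1, 2 * (100 + 1))           # -> ~0.03
```

A redistricting optimizer could then search for maps that keep every district's score above a threshold while also equalizing populations, which is essentially the same machinery gerrymanderers use, pointed at a different objective.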



Hackers Hijack DNS Server for Cryptocurrency Wallet BlackWallet, Over $400K Stolen From Users

2018-01-14T19:02:00-08:00

Unknown hackers (or a lone hacker) have hijacked the DNS server for BlackWallet.co, a web-based wallet application for the Stellar Lumen cryptocurrency (XLM). Catalin Cimpanu, reporting in Bleeping Computer: "The attack happened late Saturday afternoon (UTC timezone), January 13, when the attackers hijacked the DNS entry of the BlackWallet.co domain and redirected it to their own server. 'The DNS hijack of Blackwallet injected code,' said Kevin Beaumont, a security researcher who analyzed the code before the BlackWallet team regained access to their domain and took down the site ... 'If you had over 20 Lumens it pushes them to a different wallet' ... the attacker collected 669,920 Lumens, which is about $400,192 at the current XLM/USD exchange rate."
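The attack worked by pointing the domain's DNS records at an attacker-controlled server. One simple defensive monitor a site operator might run is to compare the IPs a domain currently resolves to against a known-good allowlist. A minimal sketch, with purely illustrative placeholder IPs (not BlackWallet's actual records):

```python
def dns_looks_hijacked(resolved_ips, expected_ips):
    """Return True if any currently-resolved IP falls outside
    the known-good set, i.e. the records may have been changed."""
    return not set(resolved_ips) <= set(expected_ips)

# Known-good A records, e.g. from the registrar's control panel (placeholders).
expected = {"203.0.113.10", "203.0.113.11"}

assert dns_looks_hijacked(["203.0.113.10"], expected) is False
assert dns_looks_hijacked(["198.51.100.99"], expected) is True  # unexpected server
```

In practice the `resolved_ips` list would come from a live lookup (e.g. Python's `socket.gethostbyname_ex`) run on a schedule, with an alert raised whenever the check fires; it would not have stopped this attack, but it would have shortened the window during which users were served the malicious code.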

Follow CircleID on Twitter

More under: Blockchain, Cyberattack, DNS




A Tipping Point for the Internet: 10 Predictions for 2018

2018-01-13T14:37:00-08:00

The year 2018 represents a tipping point for the Internet and its governance. Internet governance risks being consumed by inertia. Policy decisions are needed if we want to prevent the Internet from fragmenting into numerous national and commercial Internet(s). Geopolitical shifts, in particular, will affect how the Internet is governed. The Internet is made vulnerable by the fragmentation of global society, which is likely to accelerate in response to the ongoing crisis of multilateralism. If this crisis leads to further restrictions in the movement of people, capital, and goods across national borders, the same is likely to happen with the digital economy, including the cross-border flow of data and services.

Filling policy gaps

The first sign of a crisis in multilateralism in digital policy was the failure of the 5th UN Group of Governmental Experts (UN GGE) to reach consensus on a final report. Towards the end of 2017, the World Trade Organization (WTO) failed to agree on any mandate for e-commerce negotiations during the WTO Ministerial meeting in Buenos Aires. The gaps in global rules are increasingly being filled by bilateral and regional arrangements, in particular on cybersecurity and e-commerce. Plurilateral digital trade arrangements are being considered as an alternative to the shortcomings of the WTO e-commerce negotiations. In 2018, national legislation and courts will have a major impact on the global Internet. The main regulation with global impact will be the entry into force of the EU's General Data Protection Regulation on 25 May, which will determine how data is governed beyond the shores of Europe.

Using divergences to reach convergences

There are a few elements on which to build constructive solutions and some optimism. First, interests in digital policy are now more clearly defined than a few years ago, when digital ideologies focused only on blue-sky thinking and an 'unstoppable march into a bright digital future'. 
Governments need to deliver prosperity, stability, and security as part of their social contracts with citizens. The industry needs to make a profit, whether by selling services online or by monetizing data. Citizens have a strong interest in having their dignity and core human rights protected online as they should be offline. A common thread binds them all: every actor has a strong interest in preserving a safe, stable, and unified Internet. A clear delineation of the interests of all actors, and a healthy interdependence and complementarity between those actors, is a good basis for negotiations, compromise, and, ideally, consensus on how the Internet should further develop as a technological enabler of a stable and prosperous society. Second, the diversity of the Internet is reflected in the diversity of interests and, ultimately, negotiating positions in digital geopolitics. While the USA, China, and Russia disagreed on the future of cybersecurity regulation within the UN GGE, they did agree on the need for digital commerce regulation in the WTO. All three countries are part of the WTO plurilateral negotiations on digital commerce. This variable geometry in the positions of the main actors in digital policy could create more space for potential trade-offs and compromise. The 2018 forecast of the 10 main digital policy developments is set against this broad backdrop, which makes progress and retreat equally possible. It draws on continuous monitoring of digital policy carried out through the GIP Digital Watch observatory and further discussed during the GIP's monthly briefings. For a more in-depth analysis, read the full article. * * * 1. GDPR: Data in the centre of digital poli[...]



The Meeting That Changed the DARPA Datagram Internet

2018-01-13T13:51:00-08:00

The National Science Foundation awarded a small contract to the IEEE to host a small two-day meeting on 30 Sept 1994 of selected invitees at the IEEE's Washington DC 18th Street offices on "Name Registration For The '.COM' Domain." Being part of the InterNIC contract oversight committee, I was one of the eight invitees. In many ways it turned out to be the single most important meeting in the long, checkered history of what is today referred to as "the internet," and one that made an extraordinarily bad decision.

Prelude

What is today commonly referred to as "the internet" traces its origins back to a 1972 project undertaken by Bob Kahn, shortly after he took over the Information Processing Techniques Office (IPTO) within DARPA from the legendary Larry Roberts, to build on the datagram internet ideas undertaken the previous year by France's eminent researcher Louis Pouzin. DARPA Director Stephen Lukasik approved and funded the effort, and the TCP/IP technique was first published in 1974 as the "host-to-host" protocol. (Twenty years later, Director Emeritus Lukasik would come to regret that approval and led the first efforts to deal head-on with the profound national infrastructure protection and cybersecurity threats that were already emerging in the mid-90s. Ten years after that, for similar reasons, Larry Roberts would attempt to introduce a secure internet datagram protocol in the ITU-T.) Sometime around 1980, Kahn's protocol began to be called the "DARPA internet" and generated minor interest within the U.S. DOD and research communities - even as the banking community amusingly trademarked the term for their global ATM network internet. Pouzin's datagram internet ideas captivated and drove research establishments around the world to develop many diverse datagram internet protocols. This resulted in the U.S. chief national security networking office, the NCS, declaring in 1976 that the protocols should form the backbone for critical national infrastructure in the U.S. 
The result was a widespread effort among all the major companies, research establishments, and national governments to cooperate internationally through the ITU and ISO to establish a broad array of formal standards implementing all the elements of trusted, secure, national and global datagram internet infrastructures for public use. This included transport and network security, trusted eMail, PKI encryption, directory, IoT, and identity management services to support an array of offerings including "web-like" services. The specifications are still all there in the X-Series Recommendations. This was collectively known as the OSI internet. The U.S. commitment to the OSI internet also extended to joining with most of the world's nations in a 1988 treaty conference in Melbourne to enable datagram internet services to be publicly deployable globally, as their use for public access had been unlawful - especially because of cybersecurity concerns. The potentially dire consequences of datagram internets were underscored by the first large-scale attack on the DARPA internet, the Morris Worm, released in the weeks preceding the Melbourne Conference. This resulted in negotiations instantiating an array of cybersecurity provisions in the treaty as a quid pro quo for legalizing global public internets. The DARPA internet platform, however, continued to have ardent followers within the academic networking research communities - especially those funded by the National Science Foundation plus some counterparts in other countries. The platform was especially attractive because it was a kind of completely open, free, anarchy amon[...]



DOJ Closes Probe of VeriSign Over .Web TLD

2018-01-12T15:23:00-08:00

The Justice Department has closed its investigation into VeriSign Inc.'s involvement in an auction for the .web internet domain. Alexis Kramer reporting in BNA: "The department's antitrust division sent VeriSign, a Reston, Va.-based internet infrastructure provider, a civil investigative demand in January 2017 after the results of the .web auction. The DOJ told VeriSign Jan. 10 the investigation is closed, VeriSign said in a Securities and Exchange Commission filing. .Web applicant Nu Dot Co LLC had won the domain for $135 million in an auction run by the Internet Corporation for Assigned Names & Numbers [ICANN] ... VeriSign announced days later that it had provided funds for Nu Dot Co's bid and planned to acquire the rights to the domain. VeriSign hadn't applied for .web. The auction spurred a lawsuit against ICANN by domain name registry Donuts Inc., one of six other .web applicants."

Follow CircleID on Twitter

More under: Law, Registry Services, New TLDs




China Sends a Wake-Up Call to All Multinationals - Are You Awake?

2018-01-12T13:22:00-08:00

If you visit Marriott's China website today, you're likely to see this (screenshot below). I dumped the text of the page into Google Translate; below is what it loosely said. Loose translation: We are currently updating the website. For reservations, please call: Mainland China: 4008 844 371; Hong Kong Special Administrative Region of China: 800 908 290; or visit www.marriott.com. Marriott International respects China's sovereignty and territorial integrity. We will absolutely not support any separatist organizations that would undermine China's sovereignty and territorial integrity. We apologize for any act that may give rise to misunderstandings of the above position. So what exactly happened here? According to Skift, Marriott sent a survey in Mandarin to its Chinese loyalty members that referred to Tibet, Macau, and Taiwan as "countries." As readers of this site might know quite well by now, in the eyes of Chinese authorities this is no trivial oversight. It appears that this shutdown could last a week. I can only imagine the lively conversations being held at the highest levels within Marriott right now.

This should be a wake-up call to all organizations

I'm working on the 2018 edition of the Web Globalization Report Card and have compiled a list of websites that are currently vulnerable to the wrath of China. For the record, I don't agree with China, and I know many execs at Western-based multinationals don't either. But it doesn't matter what we think: if you want to do business in China, you have to play by its rules. In Marriott's defense, its website did not list Taiwan as a country — but it appears that someone in marketing was not well versed in this very delicate geopolitical issue. This would be a good time for any company that does business, not just in China but anywhere outside of its native country, to consider planning regular Globalization Summits. 
I've participated in a number of these over the years and find they go a long way toward raising awareness of a range of geopolitical issues, as well as toward sharing best practices. Contact me if you'd like more information — I also now include copies of Think Outside the Country.

Written by John Yunker, Author and founder of Byte Level Research

Follow CircleID on Twitter

More under: Censorship, Internet Governance, Policy & Regulation, Web [...]



New Harvard Study Recognizes Community-Owned Internet Service Providers as Value Leaders in America

2018-01-11T17:52:00-08:00

Community-owned fiber networks provide least-expensive local "broadband," according to a recent study by Harvard's Berkman Klein Center. From the report, David Talbot, Kira Hessekiel, and Danielle Kehl write: "We examined prices advertised by a subset of community-owned networks that use fiber-to-the-home (FTTH) technology. In late 2015 and 2016 we collected advertised prices for residential data plans offered by 40 community-owned (typically municipally-owned) FTTH networks. We then identified the least-expensive service that meets the federal definition of broadband (regardless of the exact speeds provided) and compared advertised prices to those of private competitors in the same markets. We were able to make comparisons in 27 communities and found that in 23 cases, the community-owned FTTH providers' pricing was lower when the service costs and fees were averaged over four years. (Using a three year-average changed this fraction to 22 out of 27.) In the other 13 communities, comparisons were not possible, either because the private providers' website terms of service deterred or prohibited data collection or because no competitor offered service that qualified as broadband. We also found that almost all community-owned FTTH networks offered prices that were clear and unchanging, whereas private ISPs typically charged initial low promotional or "teaser" rates that later sharply rose, usually after 12 months."
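The study's method of averaging costs over four years matters because of the "teaser" pricing it describes: a low promotional rate understates what subscribers actually pay over time. A minimal sketch of that averaging, using made-up illustrative prices (not figures from the study):

```python
def avg_monthly_cost(promo_rate, promo_months, regular_rate,
                     fees_per_month=0.0, horizon_months=48):
    """Average monthly cost over a horizon (default four years),
    with an initial promotional rate that reverts to the regular
    rate once the promo period ends."""
    promo_months = min(promo_months, horizon_months)
    total = (promo_rate * promo_months
             + regular_rate * (horizon_months - promo_months)
             + fees_per_month * horizon_months)
    return total / horizon_months

# Hypothetical: a $45/month teaser for 12 months that jumps to $75
# averages $67.50/month over four years, above a flat $60 plan.
private = avg_monthly_cost(45, 12, 75)    # -> 67.5
municipal = avg_monthly_cost(60, 0, 60)   # -> 60.0
```

This is why the report's comparison window (four years vs. three) can change the outcome in borderline markets: the longer the horizon, the more the post-promo rate dominates the average.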

Follow CircleID on Twitter

More under: Access Providers, Broadband




New UDRP Filing Fees at Czech Arbitration Court

2018-01-11T11:04:00-08:00

The Czech Arbitration Court (CAC) has long offered by far the least expensive filing fees for complaints under the Uniform Domain Name Dispute Resolution Policy (UDRP), but its fees are about to become more expensive, at least in most cases. CAC's base UDRP filing fee (for a dispute involving up to five domain names and a single-member panel) will increase on February 1, 2018, from 500 euros to 800 euros. As of this writing, that's equivalent to about U.S. $600. "This fee schedule reflects more the actual costs and time spent on the proceedings," according to a statement on the CAC website. While the increase is significant — sixty percent — it's still the cheapest among all five of the UDRP service providers, whose base fees are shown here (in U.S. dollars):

WIPO: $1,500
ACDR: $1,500
The Forum: $1,300
ADNDRC: $1,300
CAC: $600 (approx.)

Fees at all of the UDRP providers increase as the number of disputed domain names increases, as well as when a three-member (instead of single-member) panel is selected by the Complainant or Respondent. However, CAC is the only UDRP provider that has two fee schedules, depending upon the complexity of the proceeding and whether a response is filed. The 800 euro filing fee at CAC is described as an "initial" fee. An "additional" fee will continue to apply if a response is filed or if "the Panel determines that it is appropriate for the Complainant to pay the Additional UDRP Fees, having regard to the complexity of the proceeding." Interestingly, while the initial fee is increasing from 500 euros to 800 euros, the additional fee is decreasing (as of February 1, 2018) from 800 euros to 300 euros. Therefore, the combined initial and additional fees for a base UDRP case at CAC will decrease from 1,300 euros to 1,100 euros. However, I believe that few cases historically have been required to pay an additional fee, so — if that practice continues — most complainants at CAC will be subject to a higher (initial-only) fee. 
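The arithmetic behind those two fee schedules can be checked directly (amounts in euros, taken from the figures above):

```python
# Old and new CAC fee schedules, effective February 1, 2018 (euros).
old_initial, old_additional = 500, 800
new_initial, new_additional = 800, 300

# The initial fee rises by sixty percent...
increase_pct = (new_initial - old_initial) / old_initial * 100
assert increase_pct == 60.0

# ...yet the combined fee for a case that triggers the additional
# fee actually falls, from 1,300 euros to 1,100 euros.
assert old_initial + old_additional == 1300
assert new_initial + new_additional == 1100
```

So whether a complainant pays more or less under the new schedule hinges entirely on whether the additional fee is triggered, which, as noted above, historically it rarely is.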
CAC's low filing fees have been attractive to some trademark owners, with about 240 decisions in 2017. Still, WIPO and the Forum remain the most popular UDRP providers, handling thousands of cases each year. Of course, filing fees are just one factor to consider when choosing a UDRP service provider. Written by Doug Isenberg, Attorney & Founder of The GigaLaw FirmFollow CircleID on TwitterMore under: Domain Management, Domain Names, UDRP [...]