
CircleID



Latest posts on CircleID



Updated: 2017-08-16T09:38:00-08:00

 



The Internet is Dead - Long Live the Internet

2017-08-16T09:38:00-08:00

Back in the early 2000s, several notable Internet researchers were predicting the death of the Internet. According to that narrative, the Internet infrastructure had not been designed for the scale that was being projected at the time, which would supposedly lead to fatal security and scalability issues. Yet somehow the Internet industry has always found a way to dodge the bullet at the very last minute. While the experts projecting gloom and doom have been silent for the better part of the last 15 years, the discussion on the future of the Internet is now resurfacing. Some industry pundits, such as Karl Auerbach, have pointed out that essential parts of the Internet infrastructure, such as the Domain Name System (DNS), are fading from users' view. Others, such as Jay Turner, are predicting the outright death of the Internet itself.

Looking at the developments over the last five years, there are indeed some powerful megatrends that seem to back up the arguments made by the two gentlemen:

As mobile has penetrated the world, it has created a shift from browser-based services to mobile applications. Although not many people realize it, the users of mobile apps do not really have to interface with the Internet infrastructure at all. Instead, they simply push the buttons in the app, and the software is intelligent enough to take care of the rest. Because of these developments, key services in the Internet infrastructure are gradually disappearing from the plain sight of regular users.

As the Internet of Things (IoT) and cloud computing gain momentum, the enterprise side of the market is increasingly concerned about the level of information security. Because the majority of these threats originate from the public Internet, building walls between private networks and the public Internet has become an enormous business. With emerging technologies such as Software-Defined Networking (SDN), we are now heading towards a world littered with private networks that expand from traditional enterprise setups into public clouds, isolated machine networks and beyond.

Once these technology trends have run their course, it is quite likely that the public Internet infrastructure and the services it provides will no longer be directly used by most people. In this sense, I believe both Karl Auerbach and Jay Turner are quite correct in their assessments. Yet at the same time, both the mobile applications and the secure private networks that move the data around will continue to be highly dependent on the underlying public Internet infrastructure. Without a bedrock on which the private networks and the public cloud services are built, it would be impossible to transmit the data.

Because of this, I believe that the Internet will transform away from the open public network it was originally supposed to be. As an outcome of this process, I further believe that the Internet infrastructure will become a utility very similar to the electricity grids of today. While almost everyone benefits from them on a daily basis, only electrical engineers are interested in their inner workings or have direct access to them. So essentially, the Internet will become a ubiquitous transport layer for the data that flows within the information societies of tomorrow. From the network management perspective, the emergence of secure overlay networks running on top of the Internet will introduce a completely new set of challenges.
While network automation can carry out much of the configuration and management work, it will cause networks to disappear from plain sight in much the same way as mobile apps and public network services. This calls for new operational tools and processes to navigate this new world. Once all has been said and done, the chances are that the Internet infrastructure we use today will still be there in 2030. However, instead of being viewed as an open network that connects the world, it will have evolved into a transport layer that is primarily used for transmitting encrypted data. The Internet is Dead — Long Live t[...]



The Sustained Potential and Impact of Mobile & Wireless Technologies Access for Emerging Economies

2017-08-16T09:19:00-08:00

I believe Mobile Information and Communications Technologies (ICTs) are, and may well remain, the most powerful and best-suited technologies for providing connectivity and digital access in a much faster and cheaper way to the developing countries of the globe. Thus, they should be leveraged within their most strategic and profitable functional or usage contexts. Mobile access technologies, along with relevant innovations, have formed a powerful springboard for the Internet, significantly accelerating its access, usage and penetration. The mobile access paradigm has forged a novel and solid path to bridging the digital divide and creating a multitude of opportunities to transform and empower populations in developing countries. With reports of the total number of smartphones globally reaching 1 billion users, driven mainly by growth in emerging markets, there is great growth potential and opportunity for pre-existing challenges to be addressed.

Mobile communications technologies today come as a continuum of options offering appropriate solutions for various needs, and present a future that is full of better and more powerful technologies. With mobile broadband starting to make its way into different markets through new technologies such as 4G/5G, promising faster and richer data communications, mobile is poised to yet again revolutionize communications in developing countries. In fact, mobile-based e-applications, including m-agriculture, m-commerce, m-banking, m-government, m-health and m-learning, will be the most important and effective applications of mobile access. They are revenue and growth carriers. Their implementation as mobile applications is supported by the availability of mobile platforms. Considering the existing challenges in building suitable networks to support web applications, the most available and affordable platforms should be leveraged in such cases; an example would be the use of open-source platforms such as Android and Java Mobile.

In an age of rapid development, technology constantly challenges us on how best to manage innovation in order to maximize its benefits. Developing and emerging countries stand today at a vantage point that helps them benefit from innovation by being able to leapfrog, acquiring and integrating decent and optimal technologies that will allow them to serve their markets efficiently. Many innovations currently taking shape are bound to make the mobile access domain a vibrant one. We are shifting towards a fully digital economy where revenue and profit for mobile will hinge on the use, consumption and operation of innovative artifacts such as mobile applications, clouds, Content Delivery Networks (CDNs), and digital and virtual entertainment products. These are items that developing countries will be capable of making available and capitalizing on. Mobile entrepreneurship is a determining, market-boosting factor that will help better acquire and commercialize such technologies and products.

The mobile internet advantage for emerging regions is a crucial fulcrum, one that comes with an imperative to rethink different matters and aspects in order for the technology to be best utilized. With the Internet part of the equation, it is necessary to safeguard its various core and key principles. Network neutrality principles need to be preserved in order to avoid tiered-payment models in mobile access and therefore keep growing the mobile user base.
IPv6, for example, which will accommodate the growing number of mobile devices and connected machines, will require stewardship and management in order to be well utilized. Traffic boosting is also to be enabled through deploying new IXPs and CDNs. Good regulation is another important factor that will play a big role in this context of mobile access. Governments and mobile operators will be able to collaborate in creating and establishing effective regulatory frameworks that will help manage critical resources such as radio spectrum, Intellectual Pr[...]



U.S. Department of Justice Demands IP Addresses, Other Details on Visitors to Trump Resistance Site

2017-08-15T13:23:00-08:00

The Los Angeles-based hosting company DreamHost revealed on Monday that for the past several months it has been dealing with a search warrant from the Department of Justice pertaining to a website used to organize protests against President Trump. DreamHost says: "At the center of the requests is disruptj20.org, a website that organized participants of political protests against the current United States administration. While we have no insight into the affidavit for the search warrant (those records are sealed), the DOJ has recently asked DreamHost to provide all information available to us about this website, its owner, and, more importantly, its visitors. ... The request from the DOJ demands that DreamHost hand over 1.3 million visitor IP addresses — in addition to contact information, email content, and photos of thousands of people — in an effort to determine who simply visited the website."

Follow CircleID on Twitter

More under: Law, Privacy, Web




SpaceX Satellite Internet Project Status Update

2017-08-15T09:46:00-08:00

SpaceX orbital path schematic (source). If all goes according to plan, SpaceX will be offering global Internet connectivity by 2024. I've been following the efforts of SpaceX and OneWeb to become global Internet service providers using constellations of low-Earth orbit (LEO) satellites for some time. Launch times are getting close, so I'm posting a status update on SpaceX's project. (I'll do the same for OneWeb in a subsequent post.)

The Senate Committee on Commerce, Science, and Transportation held a hearing titled "Investing in America's Broadband Infrastructure: Exploring Ways to Reduce Barriers to Deployment" on May 3, 2017, and one of the expert witnesses was Patricia Cooper, SpaceX Vice President, Satellite Government Affairs. She began her oral testimony with a description of SpaceX and its capabilities and went on to outline the disparities in broadband availability and quality and the domestic and global broadband market opportunities. Next, she presented their two-stage plan. The first, LEO, satellite constellation [PDF] will consist of 4,425 satellites operating in 83 orbital planes at altitudes ranging from 1,110 to 1,325 km. They plan to launch a prototype satellite before the end of this year and a second one during the early months of 2018. They will start launching operational satellites in 2019 and will complete the first constellation by 2024.

The LEO satellites launched in the first phase of the project will enable SpaceX to bring the Internet to all underserved and rural areas of the Earth. If all goes according to plan, SpaceX will be offering global Internet connectivity by 2024. These satellites may also have an advantage over terrestrial networks for long-range backhaul links since they will require fewer router hops, as shown in an illustration comparing a terrestrial route (14 hops) with a satellite route (5 hops) between Los Angeles and a university in Punta Arenas, Chile (the figure is drawn to scale).

Ms. Cooper also said they had filed for authority to launch a second constellation of 7,500 satellites operating closer to the Earth — in very low Earth orbit (VLEO). A 2016 patent by Mark Krebs, then at Google, now at SpaceX, describes the relationship between the two constellations. I don't have dates for the second constellation, but the satellite altitudes will range from 335.9 to 345.6 km. (The International Space Station orbits at 400 km.) These satellites will be able to provide high-speed, low-latency connectivity because of their low-altitude orbits. Coverage of the two constellations will overlap, allowing for dynamic handoffs between them when desirable. When this second constellation is complete, SpaceX might be able to compete with terrestrial networks in densely populated urban areas. These VLEO satellites might also be used for Earth imaging and sensing applications, and a bullish article by Gavin Sheriden suggests they may also connect all Tesla cars and Tesla solar roofs. Very low Earth orbit (VLEO) satellites have smaller footprints, but are faster and have lower latency than higher-altitude satellites.

Ms. Cooper concluded her testimony with a discussion of administrative barriers they were encountering and listed six specific policy recommendations. You can see her full written testimony here. The entire hearing is shown below, and Ms. Cooper's testimony begins at 13:54.
width="644" height="362" src="https://www.youtube.com/embed/bYw8WZnqFyE?rel=0&showinfo=0" frameborder="0" allowfullscreen> I will follow this post with a similar update on OneWeb, SpaceX's formidable competitor in the race to become a global Internet service provider using satellites. Global connectivity is a rosy prospect, but we must ask one more question. Success by either or both of these companies could, like the shift from dial-up to broadband, disrupt the Internet service industry. As of July/August 1997, there were 4,009 ISPs in North America, and today few [...]



Aviation: The Dirty, Not-So-Little Secret of Internet Governance

2017-08-15T07:49:00-08:00

This article aims to provide an overview of carbon offsetting, a guide to investing in carbon offsetting programs, and concludes with a call to action by the Internet governance community to research and ultimately invest in suitable carbon offsetting programs. Almost a year ago, I began writing about the relationship between the Internet/information and communications technologies (ICTs), the environment, and sustainability. One of the points I made in my first article on the subject is that there is much more we as a community can do to reduce our ecological footprint and enhance the sustainability of the Internet — which happens to be good for both the planet and business. This necessity, combined with the ever-growing urgency to act, hit hard when I recently read a New York Times article about how bad flying is for the environment.

If anyone is reading this in the northern hemisphere at the moment (or anywhere, really), I don't need to remind you about how hot it is outside. The fact is, the earth is getting warmer, and anecdotes only serve as a reminder — for instance, it's been so hot in parts of the Middle East this summer that palm trees have been spontaneously combusting, and it was so hot in Phoenix, Arizona, in June that some airplanes couldn't even take off. What's the connection, though, between our warming planet and Internet governance? Beyond the façade of the seemingly glamorous lifestyle of the Internet governance community, marked by cocktail dinners, international travel, and exotic locales, lies an uncomfortable truth: we are high carbon emitters. Let's face it: for anyone working on Internet infrastructure or Internet governance, traveling is practically a requirement for the job and a veritable necessity for the Internet governance community. As I wrote previously on CircleID: "[Factor in] the uncomfortable reality that to effectively govern a critical global resource means heavy reliance on air travel, it places a lot of existential pressure on the Internet governance community and policy-makers, while also providing even more impetus to produce effective, sustainable, and impactful outcomes at global meetings."

So, even if it is unsurprising news, you can nevertheless imagine my ethical dilemma as I continue to advocate for sustainability but take more flights in a year than many people will take in their whole lifetime. While the guilt and the accompanying affliction of hypocrisy are pernicious, I am working to reduce my carbon footprint in other ways and trying to travel less. Yet, even though we can — and should — make changes to our lifestyles, such as eating less meat, recycling more, and cutting down on emissions wherever possible, personal responsibility can only go so far — even renewables aren't necessarily a panacea.

The real cost of flying

According to the Australian consumer advocacy group CHOICE, airlines emitted 781 million tons of carbon dioxide (CO2) into the atmosphere in 2015, representing about 2% of human-caused CO2 emissions. While this pales in comparison to other industries like energy production, manufacturing, or agriculture in terms of total global emissions, "If global aviation was a country," Alison Potter, writing for CHOICE, stressed, "its CO2 emissions would be ranked seventh in the world, between Germany and South Korea. And as flying becomes cheaper and more popular, the problem is heading skyward. Global passenger air traffic grew by 5.2% from June 2015 to June 2016 and emissions are growing at around 2-3% a year."
Moreover, if someone flew round-trip in economy class directly from New York City to the Internet Corporation for Assigned Names and Numbers' (ICANN) headquarters in Marina del Rey, California, a municipality in the Los Angeles area, they would emit anywhere between 1.09 and 1.78 metric tons of CO2 (the discrepancy depends on the methodology used by different carbon calculators, common ones bein[...]
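To make the arithmetic behind those calculators concrete, here is a hedged sketch: great-circle distance multiplied by a per-passenger emission factor. The coordinates are approximate, and the emission factors are assumptions chosen only to bracket the 1.09 to 1.78 tonne range quoted above, not figures from any particular calculator.

```python
# Rough round-trip CO2 estimate for New York City <-> Marina del Rey.
from math import radians, sin, cos, asin, sqrt

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

nyc = (40.71, -74.01)    # New York City (approximate)
mdr = (33.98, -118.45)   # Marina del Rey, CA (approximate)
round_trip_km = 2 * great_circle_km(*nyc, *mdr)

for factor in (0.14, 0.22):  # assumed kg CO2 per passenger-km, economy class
    print(f"{factor} kg/pkm -> {round_trip_km * factor / 1000:.2f} t CO2")
```

With a round trip of roughly 7,900 km, factors in that assumed range reproduce the article's 1.09 to 1.78 tonne spread, which is exactly the methodology gap the different calculators reflect.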



Should the EB-5 Investor Visa Program Recognize Cyber Workers?

2017-08-12T11:17:00-08:00

The EB-5 Investor Visa Program was created by Congress in 1990 to "stimulate the U.S. economy through job creation and capital investment by foreign investors." The program, administered by the Department of Homeland Security's U.S. Citizenship and Immigration Services (USCIS), provides that "entrepreneurs (and their spouses and unmarried children under 21) are eligible to apply for a green card (permanent residence) if they: Make the necessary investment in a commercial enterprise in the United States; and Plan to create or preserve 10 permanent full-time jobs for qualified U.S. workers."

The EB-5 program encourages foreign entrepreneurs to invest in a Targeted Employment Area (TEA). A TEA is defined as a rural area or an area where the unemployment rate is at least 150% of the national average. The EB-5 program has delegated to states the authority to designate various TEAs on a project-specific basis. By locating a commercial enterprise in a state-designated TEA, foreign investors sharply reduce the size of the investment needed to qualify for a green card.

The EB-5 regulations, which were written in 1990, take a geocentric approach to defining TEAs by assuming that an enterprise's employees live near its principal place of business. In 1990, this was not an unreasonable assumption. Today, it is. It is now common for American workers to physically commute to jobs that are located in different metropolitan areas and to cyber-commute to jobs anywhere in the country. Internet-based employment is an efficient means of providing economic opportunities to workers who live in rural America and areas of high unemployment.

The current TEA designation process has been the subject of criticism over concerns that EB-5 investments are not helping the program's intended beneficiaries. In response to criticism and the passage of time, DHS is updating its EB-5 regulations. DHS's Notice of Proposed Rulemaking explains that the program's reliance "on states' TEA designations has resulted in the application of inconsistent rules by different states. ... the deference to state determinations provided by current regulations has resulted in the acceptance of some TEAs that consist of areas of relative economic prosperity linked to areas with lower employment, and some TEAs that have been criticized as 'gerrymandered.'"

DHS's response to this concern is a proposal to (1) centralize TEA decisions in Washington and (2) create an even more georestrictive requirement for TEAs. The proposed rule does not consider Americans who could "commute" to work via the internet. In short, the proposed regulation doubles down on the 1990 mindset that workers live near their place of employment, a faulty assumption that has helped fuel the "gerrymandering" issue.

There is no statutory requirement that new commercial enterprises be physically located in TEAs in order for investors to qualify for the TEA provisions of the EB-5 program. To the contrary, the statute states that investor visas "shall be reserved for qualified immigrants who invest in a new commercial enterprise ... which will create employment in a targeted employment area." One option for the EB-5 program would be to allow a new commercial enterprise to qualify for TEA EB-5 investment, irrespective of the business's location, by committing to hire workers who live in a designated TEA. By leveraging the internet, the EB-5 program could provide technology jobs to Americans who live in rural and high-unemployment areas.
Written by Bruce Levinson, SVP, Regulatory Intervention - Center for Regulatory Effectiveness

Follow CircleID on Twitter

More under: Law, Policy & Regulation [...]



Supporting New DNS RR Types with dnsextlang, Part II

2017-08-11T07:54:00-08:00

The previous article introduced my DNS extension language, intended to make it easier to add new DNS record types to DNS software. It described a new perl module, Net::DNS::Extlang, that uses the extension language to automatically create perl code to handle new RRTYPEs. Today we look at my second project, intended to let people create DNS records and zone files with new RRTYPEs.

I've long had a DNS "toaster", a web site where my users and I could manage our DNS zones. Rather than limiting users to a small list of RRTYPEs, it lets users edit the text of their zone files, which works fine except when it doesn't. Every hour a daemon takes any changed zone files, signs them, and passes them to the DNS servers. With no syntax check in the toaster, if there's a syntax error in a zone file, the entire rebuild process fails until someone (usually me) notices and fixes it. Since the toaster is written in python, I wrote a python library that uses the same extension language to do syntax-checked zone edits, and a simple version of the toaster as a django app that people can start with.

The syntax checker does two things: one is to read text strings that are supposed to be DNS master files, or single master records, and check whether they're valid. The other is to create and parse HTML forms for DNS records to help people enter valid ones. To show how this works, I put a series of screen shots in this PDF so you can follow along. The first screen shows the site after you log in, with a few existing random domains. If you click the Create tab, you get the second screen, which lets you fill in the domain name and (if you're a site admin) the name of the user who owns the site. Click Submit, and now you're on the edit page, where you can see the zone has been created with a single comment record, just so it wouldn't be empty. There's a New Record: section where you can choose the record type you want to create, and click Add. The set of record types is created on the fly from the extension language database in the DNS that I described in the last blog post, so you can create and later edit any RRTYPE that the extension language can describe.

We choose MX and click Add, which gives us a screen with a form that has all of the fields in the MX record. This form is also created on the fly by the extension language library, so for each rrtype, it will show an appropriate form with prompts for each field. Fill in the form and click Submit, and the record is added to the zone file if it's valid. The next screen shows what happens if you get the syntax wrong, in this case, an A record with an invalid IPv4 address. The extension library has a class for every field type that produces helpful error messages in case of syntax errors. Since sometimes it's tedious to edit one record at a time, there's also a Block edit mode, shown in the next screen, where you can edit the zone as a block of text. When you submit the changes, it syntax checks the zone. The next screen shows an error message for an AAAA record with an invalid IPv6 address. Not shown are some other odds and ends, notably a batch script that exports a list of zone names and a set of zone files that you can give to your DNS server.

The django app is only about 1000 lines of python, of which about 1/3 manages the various web pages, 1/3 connects the extlang library to the forms generated by django's forms class, and 1/3 is everything else. The python library is in pypi at https://pypi.python.org/pypi/dnsextlang/, currently python3 only.
The django app is on github at https://github.com/jrlevine/editdns, written in django 1.9 and python3. It uses the dnsextlang library, of course.

Written by John Levine, Author, Consultant & Speaker

Follow CircleID on Twitter

More under: DNS [...]
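The value of a syntax check before the hourly rebuild is easy to illustrate. The sketch below is not the dnsextlang or django code described above; it is a minimal, hedged example using the widely available dnspython package to reject a zone file containing an invalid record before it ever reaches a signing daemon. The zone text is hypothetical.

```python
# Minimal zone syntax check (assumption: "pip install dnspython").
import dns.zone
import dns.exception

ZONE_TEXT = """
@    3600 IN SOA ns1.example.com. hostmaster.example.com. 1 7200 3600 1209600 3600
@    3600 IN NS  ns1.example.com.
www  3600 IN A   192.0.2.10
bad  3600 IN A   999.0.2.10   ; invalid IPv4 address -- should be rejected
"""

def first_zone_error(text: str, origin: str):
    """Return the first parse error in the zone text, or None if it is valid."""
    try:
        dns.zone.from_text(text, origin=origin, relativize=True, check_origin=False)
        return None
    except dns.exception.DNSException as exc:
        return str(exc)

if __name__ == "__main__":
    error = first_zone_error(ZONE_TEXT, "example.com")
    print("zone OK" if error is None else f"rejected: {error}")
```

A toaster that runs a check like this on every edit, and only hands clean zones to the rebuild pipeline, avoids the failure mode described in the article where one bad record blocks everyone's updates.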



Is a New Set of Governance Mechanisms Necessary for the New gTLDs?

2017-08-10T20:53:00-08:00

In order to reply to the question of whether a new set of governance mechanisms is necessary to regulate the new generic Top-Level Domains (gTLDs), one should first consider how efficiently the current Uniform Domain-Name Dispute-Resolution Policy (UDRP) from the Internet Corporation for Assigned Names and Numbers (ICANN) has performed, and then move to an evaluation of the Implementation Recommendation Team (IRT) recommendations.

In September 2008, an analysis of the opportunities and problems for trademark owners presented by the introduction of new gTLDs [1] was published in Trademark World magazine. That analysis identified several brand protection challenges, such as the absence of required pre-launch rights protection mechanisms (RPMs), the problems of defensive registrations, and the unprecedented potential for cybersquatting [2]. According to Kristina Rosette [3], an Intellectual Property Constituency Representative to ICANN's Generic Names Supporting Organization (GNSO) Council and ex-member of the Implementation Recommendation Team, ICANN has made little advancement on the issue of trademark protection in the new gTLDs despite the efforts of numerous trademark owners, associations, and lawyers.

Issues with the UDRP

In February 2010, the ICANN GNSO Council passed a resolution [4] requesting ICANN staff to draft an Issues Report [5] on the current state of the UDRP. According to that motion, the draft had to focus mainly on issues of:
— How insufficiently and unequally the UDRP has addressed the problems of cybersquatting;
— Whether the definition of the term 'cybersquatting' needed to be reviewed or updated in the existing UDRP language, including a possible revision of the policy development process.

In his book [6], 'The Current State of Domain Name Regulation: domain names as second-class citizens in a mark-dominated world', Dr. Komaitis has interestingly outlined some of the major issues related to the UDRP which have commonly contributed to its procedural unfairness. Some of those issues [7] can be broken down as follows:

The panellists associated with the UDRP have mainly a trademark law background, which is not sufficiently oriented to the multi-stakeholder approach.

The UDRP makes arbitrary use of precedent. One unique feature of the emerging arbitration process under the UDRP has been the development of its own jurisprudence. While most arbitration is done with little, if any, public disclosure, the publication of UDRP opinions on the Web has led to a practice of citing back to previous panel decisions. Some decisions have used the previous cases with only the weight of persuasive authority, while others appear to view themselves as being bound by precedent. In several cases, panels have used opinions from previous cases as persuasive authority to help address a variety of procedural and substantive matters. For example, J.P. Morgan v. Resource Marketing (jpmorgan.org) D2000-0035 [8] was a dispute involving an American Complainant and an American Respondent. The Respondent's reply was late and the Complainant argued for the inadmissibility of the late response (the Complainant cited Talk City, Inc. v. Robertson (talk-city.com) D2000-0009 [9] as precedent for this position).

The UDRP is based upon the assumption that all domain name registrations are potentially abusive and harmful, without any distinction or assessment between actual harm and the likelihood of such harm. In practice, this is not always the case.
There is no authority responsible for the validation of the decisions that emerge from the UDRP panels. The bad faith element is open to wide and discretionary, if not discriminatory, interpretations. Trademark attorneys were initially concerned by the UDRP's bad-faith use requirement because under US trademark law, "use" meant that the domain name had to be "used in commerce"[...]



Where to Search UDRP Decisions

2017-08-10T13:25:00-08:00

Searching decisions under the Uniform Domain Name Dispute Resolution Policy (UDRP) is important — for evaluating the merits of a potential case and also, of course, for citing precedent when drafting documents (such as a complaint and a response) in an actual case. But searching UDRP decisions is not always an easy task. It's important to know both where to search and how to search. Unfortunately, there is no longer an official, central repository of all UDRP decisions that is freely available online. Instead, each of the UDRP service providers publishes its own search page, at the links below:

World Intellectual Property Organization (WIPO)
The Forum (formerly known as the National Arbitration Forum, or NAF)
Asian Domain Name Dispute Resolution Centre (ADNDRC)
Czech Arbitration Court (CAC)

(The newest UDRP service provider, the Arab Center for Dispute Resolution, has had only two cases as of this writing and does not have — or, therefore, need — a search tool.)

Each of these providers offers a different search engine, some of which are better than others. For example, three of the search pages (WIPO, the Forum and CAC) offer field-based searches with the ability to find decisions based on criteria such as the disputed domain name, the complainant or the respondent; ADNDRC provides only a general search field. There are other differences, too. For example, WIPO and the Forum are the only providers that also provide an index-based search. WIPO and the Forum offer the ability to limit searches to specific domain name dispute policies (other than the UDRP), but none of the providers lets users search by all relevant criteria, and only the Forum allows searches by specific top-level domains within the UDRP.

Google and Other Services

For advanced searches, it's sometimes helpful to conduct a Google search instead of using a UDRP-specific tool, limiting results to the relevant UDRP service provider's domain. For example, adding "site:wipo.int" (without the quotes) to the beginning of a general Google search will produce results only from the WIPO website. Because this means that the results may contain pages other than UDRP decisions, I often add another phrase to my search that I know will produce a UDRP result (such as: "The Complaint was filed with the WIPO Arbitration and Mediation Center"). Yes, it's awkward, but it works pretty well. There are some third-party websites that offer UDRP search tools, such as UDRP Search, DNDisputes and DomainFight.net. But, like the official UDRP service providers' engines, these, too, have limitations. While DNDisputes offers more search fields, its decisions are limited to those from WIPO, and DomainFight.net's are limited to WIPO and the Forum. UDRP Search's options are not very robust.

Bottom Line: Use Them All

Ultimately, using some combination of all of the above tools and techniques is often the best practice. Doing so will enable you to search the widest number of decisions in the most advanced way possible. After more than 17 years and 60,000 decisions, UDRP jurisprudence is obviously very robust. Unfortunately, finding the most important and relevant decisions requires mastery of both the art and science of search.

Written by Doug Isenberg, Attorney & Founder of The GigaLaw Firm

Follow CircleID on Twitter

More under: Domain Names, Law, UDRP [...]
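As a hedged illustration of the Google technique described above, the snippet below builds a search URL that restricts results to wipo.int and includes the distinguishing phrase quoted in the article. The query term "example trademark" is only a placeholder.

```python
# Build a Google query restricted to a UDRP provider's site, plus a phrase
# that appears in that provider's decisions (both taken from the article).
from urllib.parse import urlencode

def udrp_google_query(terms: str,
                      site: str = "wipo.int",
                      marker: str = '"The Complaint was filed with the WIPO '
                                    'Arbitration and Mediation Center"') -> str:
    query = f"site:{site} {marker} {terms}"
    return "https://www.google.com/search?" + urlencode({"q": query})

print(udrp_google_query("example trademark"))
```

Swapping in a different provider's domain and a phrase common to its decisions gives the same effect for the Forum, ADNDRC, or CAC.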



Broadband Providers: What are the Implications of Virtual Reality?

2017-08-09T18:32:00-08:00

Broadband service providers, take note: personal virtual reality (VR) platforms are going to reshape the industry sooner than you think. We've seen a constant stream of VR-related news coming from major industry tradeshows, online broadband publications, and even broadband CEO blog posts. I'll try to generalize their comments succinctly: personal VR platforms are expected to bring massive sales, huge increases in bandwidth consumption, and dramatic shifts in subscriber quality expectations. This is an exciting time for broadband service providers, but it's essential to consider what implications VR will have for your organization. Here are some key questions to answer if you want to stay ahead of the VR trends:

Which Access Network Delivers the Bandwidth Required for VR?

Bandwidth usage is about to go way, way up. Major League Baseball has recently teamed up with Intel to announce a new project in which they will deliver one live-streamed game per week in virtual reality. This represents a new age for sports enthusiasts, who will be able to tune in from the comfort of their homes to watch live, 360-degree footage of a baseball game as if they were in the stadium. The bandwidth required not only to deliver this footage, but to maintain high quality throughout, will be unprecedented. Forbes Magazine recently broke down how many gigabits per second it would take to generate a digital experience at the full fidelity of human perception, predicting that humans can process an equivalent of nearly 5.2 Gbps of sound and light — more than 200x what the FCC predicted to be the future requirement for broadband networks (25 Mbps). Operators who want to stay ahead of the curve have to make important decisions about which access network will best fit their subscribers' needs. Fiber, DOCSIS 3.1 (Full Duplex DOCSIS 3.1), and converged approaches all have their benefits.

How Much Network and Subscriber Visibility is Required to Optimize Services?

The simple answer: a lot. Providers looking to satisfy the needs of their subscribers as next-generation content delivery platforms like VR enter the mainstream need a holistic view of the network, but they also need visibility beyond the network edge into the subscriber premises. This can be achieved with a combination of TR-069 standards and Internet Protocol Detail Record (IPDR) data, which allows operators to monitor the access network as well as customer edge equipment. By gaining a picture of the entire services network (including beyond the last mile), you can ensure service quality issues are minimized while also proactively resolving network and customer equipment issues, many times before the subscriber is even affected. This also leads to better network intelligence, meaning faster issue resolution when subscribers have to phone the customer call center. Increasing visibility into the usage habits of your subscribers also enables you to make better predictions when planning for future capacity requirements. As an added bonus, you will open up new avenues to optimize and personalize the user experience like never before, which brings me to the last question.

Will Traditional Service Models Still Meet Subscriber Needs?

Service usage habits are changing more rapidly than ever, and customer preferences are becoming more unique. The era of simple tiered service plans is reaching an impasse.
Broadband providers must look for ways to implement strategic service plans that can deliver the right services at high quality to the subscribers who need them, while ensuring subscribers who require less bandwidth aren't affected by network congestion and buffering. Diversity among service plans is essential. Early adopters of VR must have the bandwidth they need to stream live events in high quality, or play videogames[...]
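As a quick sanity check on the "more than 200x" figure above, the following snippet simply divides the two numbers quoted in the article: the Forbes estimate of human perceptual throughput against the FCC's 25 Mbps broadband benchmark.

```python
# Ratio of the quoted perceptual-fidelity estimate to the FCC broadband benchmark.
human_perception_bps = 5.2e9   # ~5.2 Gbps, Forbes estimate cited above
fcc_benchmark_bps = 25e6       # 25 Mbps FCC broadband benchmark
print(f"ratio: {human_perception_bps / fcc_benchmark_bps:.0f}x")  # about 208x
```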



Supporting New DNS RR Types with dnsextlang, Part I

2017-08-09T11:12:00-08:00

The Domain Name System has always been intended to be extensible. The original spec in the 1980s had about a dozen resource record types (RRTYPEs), and since then people have invented many more, so now there are about 65 different RRTYPEs. But if you look at most DNS zones, you'll only see a handful of types: NS, A, AAAA, MX, TXT, and maybe SRV. Why? A lot of the other types are arcane or obsolete, but there are plenty that are useful. Moreover, new designs like DKIM, DMARC, and notoriously SPF have reused TXT records rather than defining new types of their own. Why? It's the provisioning crudware. While DNS server software is regularly updated to handle new RRTYPEs, the web-based packages that most people have to use to manage their DNS are almost never updated and usually handle only a small set of RRTYPEs.

This struck me as unfortunate, so I defined a DNS extension language that provisioning systems can use to look up the syntax of new RRTYPEs, so when a new type is created, only the syntax tables have to be updated, not the software. Paul Vixie had the clever idea to store the tables in the DNS itself (in TXT records, of course), so after a one-time upgrade to your configuration software, new RRTYPEs work automagically when their description is added to the DNS. The Internet draft that describes this has been kicking around for six years, but with support from ICANN (thanks!) I wrote some libraries and a sample application that implement it.

Adding new RRTYPEs is relatively straightforward because the syntax is quite simple. Each record starts with an optional name (the default being the same as the previous record), optional class and time to live, the mnemonic for the record type such as A or MX or NAPTR, and then a sequence of fields, each of which is a possibly quoted string of characters. Different RRTYPEs interpret the fields differently, but it turns out that a fairly small set of field types suffices for most RRTYPEs. Here's a typical rrtype description, for a SRV record. In each line, the stuff after the space is descriptive text.

SRV:33:I Server Selection
  I2:priority Priority
  I2:weight Weight
  I2:port Port
  N:target Target host name

The first line says the mnemonic is SRV, the type number is 33, and it's only defined in the IN class (the "I"). There are three two-byte integer fields, priority, weight, and port, and a DNS name, target. The first word on each field line is the field name; the rest of the line is a comment for humans. When stored in the DNS, each of those lines is a string in DNS TXT records, like this:

SRV.RRNAME.ARPA. IN TXT ("SRV:33:I Server Selection" "I2:priority Priority"
  "I2:weight Weight" "I2:port Port" "N:target Target host name")
33.RRTYPE.ARPA. IN TXT ("SRV:33:I Server Selection" "I2:priority Priority"
  "I2:weight Weight" "I2:port Port" "N:target Target host name")

In the DNS, there are two copies, one at the text name of the RRTYPE, and one at its numeric code. (Until the records are there, the software packages let you change the location. I've put descriptions at name.RRNAME.SERVICES.NET and number.RRNAME.SERVICES.NET.) See the Internet Draft for the full set of field types and syntax details. The first software package I wrote is an extension to the popular perl Net::DNS module called Net::DNS::Extlang.
With the extension, if Net::DNS sees a text master record with an unknown RRTYPE name, or a binary record with an unknown RRTYPE number, it tries to look up the record description in the DNS and, if successful, passes the description to Net::DNS::Extlang, which compiles it into a perl routine to encode and decode the RRTYPE, which Net::DNS then installs. The authors of Net::DNS worked with me, so recent versions of Net::DNS have the [...]
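To make the shape of these descriptions concrete, here is a hedged Python sketch that parses the SRV description shown above into a small data structure. It is an illustration of the format as described in this article, not the actual Net::DNS::Extlang or dnsextlang code.

```python
# Parse an rrtype description: a header line "MNEMONIC:number:classes  comment"
# followed by field lines "FIELDTYPE:fieldname  comment".
from dataclasses import dataclass

@dataclass
class Field:
    ftype: str    # e.g. "I2" (two-byte integer) or "N" (DNS name)
    name: str     # e.g. "priority"
    comment: str  # human-readable description

@dataclass
class RRType:
    mnemonic: str
    number: int
    classes: str
    comment: str
    fields: list

def parse_description(lines):
    head, *rest = [ln.strip() for ln in lines if ln.strip()]
    spec, _, comment = head.partition(" ")
    mnemonic, number, classes = spec.split(":")
    fields = []
    for line in rest:
        fspec, _, fcomment = line.partition(" ")
        ftype, _, fname = fspec.partition(":")
        fields.append(Field(ftype, fname, fcomment.strip()))
    return RRType(mnemonic, int(number), classes, comment.strip(), fields)

SRV_DESC = [
    "SRV:33:I Server Selection",
    "I2:priority Priority",
    "I2:weight Weight",
    "I2:port Port",
    "N:target Target host name",
]

if __name__ == "__main__":
    rr = parse_description(SRV_DESC)
    print(rr.mnemonic, rr.number, [f.name for f in rr.fields])
```

A provisioning system that fetched the TXT strings from the DNS, fed them through a parser like this, and mapped each field type to a form widget and a validator would get the "new RRTYPEs work automagically" behavior the article describes.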



Wireless Innovations for a Networked Society

2017-08-09T08:56:00-08:00

Last week, I had the honor of moderating an engaging panel discussion at Mozilla on the need for community networks and the important role they can play in bridging the digital divide. The panel highlighted the success stories of some of the existing California-based initiatives that are actively working toward building solutions to get under-served communities online. But why do we need community networks when nationwide service providers already exist? According to Recode and the Wireless Broadband Alliance, 62 million Americans in urban centers and 16 million Americans in rural locations either don't have access to or can't afford broadband Internet. Locally, even in tech-driven San Francisco, more than 100,000 residents still do not have access to the Internet at home. A potential solution to help close this gap is to develop community-based connectivity projects centered around the needs of local individuals.

Empowering New Voices

The goals of the Mozilla challenge are simple: how do you connect the unconnected, and how do you connect people to essential resources when disaster strikes? The technical challenge can be approached in many different ways, but the crux of the problem lies in understanding and meaningfully addressing specific community needs. Ideally, by empowering individuals at the grassroots level, the dream is also to cultivate new voices that shape the future of the web and members who partake in the digital economy, reaping the benefits of a connected society.

Championing Connectivity

A key take-away from the event was the recurring challenge most of these organizations have faced at some point: how to build a project that is sustainable in the long term? Often, despite projects being adequately funded and having a clear technical plan of action, a lack of local leadership and community engagement to carry these projects forward can result in an abrupt end once the initial deployment is completed. Local champions for connectivity, people who understand that digital dividends can change lives and who know the value of a connected society, are critical in developing a solution that serves the community and builds an empowered Internet citizenry. As speaker Steve Huter recounted from his many years of experience at the Network Startup Resource Center, "Request driven models tend to evolve better and have more engagement from developers and beneficiaries working together. Often as technologists, we get excited about a particular technology and we have the solution in mind; but it is really important to make sure that we are addressing the needs of the specific community and solving the right model."

Breaking Barriers

Important work driving this mission is already underway in the San Francisco Bay Area. On the Mozilla panel, experts from three initiatives furthering this cause discussed their work, but more importantly provided guidance to the attendees on how to approach these Challenges. Speakers represented on the panel included:

Marc Juul from People's Open – a community-owned and operated peer-to-peer wireless network in Oakland
Thu Nguyen from Flowzo – a start-up trying to fix the issue of last-mile connectivity through multi-stakeholder cooperation
Steve Huter from Network Startup Resource Center – a 25-year-old organization that has helped build Internet infrastructure in 100 countries around the world
Fatema Kothari from the San Francisco Bay Area Internet Society

A complete video recording of this event can be found here.
* * * Mozilla and the National Science Foundation are partnering to give away $2M in prizes for wireless solutions that help connect the unconnected, keep communities connected following major disasters, and which make use o[...]



Cuban Professors Get Laptops But No Wifi Capabilities

2017-08-08T20:46:00-08:00

Late last year, we learned that China's 90,000-employee Haier Group would be producing laptops and tablets in partnership with GEDEME, a Cuban manufacturer that will assemble the machines using Haier parts, equipment, and production processes. Last week, a friend who is a professor at the University of Havana told me that he and other professors have been given GDM laptops. He said UCI, ISPJAE and University of Havana faculty were the first to receive the laptops, but eventually all professors at all universities would get them. When Haier announced they would be producing laptops in Cuba, they said the machines would have Core i3, Celeron and Core i5 CPUs with up to 1 TB of memory. The processor in my friend's machine is a 1.60GHz Celeron N3060, which Intel announced April 1, 2015. The N3060 is a system on a chip with two processor cores, a graphics processing unit, and a memory controller. His laptop has 4 GB of RAM, a 97.31 GB hard drive, a CD-ROM drive and a 1,024 x 768 pixel display with 32-bit color depth. It has a wired Ethernet port, but no WiFi or Bluetooth. The machine came with UCI's Nova Unix operating system, but my friend has installed Windows in its place, and he says most people do the same. (Cuban officials say they can achieve software independence using Nova, but Cuba is not large enough to support its own software, services, and standards.)

These are low-end laptops, but they represent a significant step up over phones and tablets for content creation. They are also power-efficient, making them suitable for portable use, but for some reason, they do not have WiFi radios. A laptop without WiFi is striking today. I don't know what the marginal cost of WiFi would have been, but Alibaba offers many chips for under $5 in relatively small lots. Why don't these machines have WiFi radios? Is the government trying to discourage portable use at home or at public-access hotspots? Regardless of the reason, WiFi dongles are a low-cost fix. There are not a lot of WiFi dongles for sale on Revolico today and their prices are high, but I bet the offerings will pick up if these laptops roll out.

Written by Larry Press, Professor of Information Systems at California State University

Follow CircleID on Twitter

More under: Access Providers [...]



British Organizations Could Face Massive Fines for Cybersecurity Failures

2017-08-08T06:30:00-08:00

Organizations that fail to implement effective cybersecurity measures could be fined as much as £17 million or 4% of global turnover, as part of Britain's plan to prevent cyberattacks that could result in major disruption to services such as transport, health or electricity networks. The Guardian reports: "The move comes after the [National Health Service] NHS became the highest-profile victim of a global ransomware attack, which resulted in operations being cancelled, ambulances being diverted and patient records being made unavailable. ... The issue came to the fore again after a major IT failure at British Airways left 75,000 passengers stranded and cost the airline £80m… The consultation will also focus on system failures, with requirements for companies to show what action they are taking to reduce the risks."

Follow CircleID on Twitter

More under: Cyberattack, Cybersecurity, Policy & Regulation




China Carries Out Drill with ISPs to Practice Taking Down Websites Deemed Harmful

2017-08-04T11:40:00-08:00

China carried out a drill on Thursday to practice shutting down websites that are deemed harmful, amid the country's preparation for a sensitive political reshuffle set to take place later this year. Sijia Jiang reporting in Reuters: "Internet data centers (IDC) and cloud companies ... were ordered to participate in a three-hour drill to hone their 'emergency response' skills, according to at least four participants that included the operator of Microsoft's cloud service in China. ... The drill asked internet data centers to practice shutting down target web pages speedily and report relevant details to the police, including the affected websites' contact details, IP address and server location."

Follow CircleID on Twitter

More under: Access Providers, Censorship, Internet Governance




British Security Researcher Credited for Stopping WannaCry Is Charged in a U.S. Cybercrime Case

2017-08-04T11:22:00-08:00

Cybersecurity researcher to appear in court in Las Vegas charged in a US cybercrime case. The 23-year-old British security researcher, Marcus Hutchins, who a few months ago was credited with stopping the WannaCry outbreak by discovering a hidden "kill switch" for the malware, is now reported to have been arrested by the FBI over his alleged involvement in separate malicious software targeting bank accounts. The Guardian reports: "According to an indictment released by the US Department of Justice on Thursday, Hutchins is accused of having helped to create, spread and maintain the banking trojan Kronos between 2014 and 2015. The Kronos malware was spread through emails with malicious attachments such as compromised Microsoft Word documents, and hijacked credentials such as internet banking passwords to let its user steal money with ease."

The Kronos indictment: Is it a crime to create and sell malware? Orin Kerr from the Washington Post writes: "The indictment asserts that Hutchins created the malware and an unnamed co-conspirator took the lead in selling it. The indictment charges a slew of different crimes… Do the charges hold up? Just based on a first look at the case, my sense is that the government’s theory of the case is fairly aggressive. It will lead to some significant legal challenges."

Follow CircleID on Twitter

More under: Cybercrime, Cybersecurity, Malware




Renewed Internet.nl Website: Modern Standards Need to be Used for a Free, Open and Secure Internet

2017-08-04T08:49:00-08:00

Modern Internet standards provide for more reliability and further growth of the Internet. But are you using them? You can test this on the Dutch website Internet.nl (also available in English and Polish). Recently the website was renewed. Not only has the style been adapted, but also the way the tests are performed and the way the results are shown. A lot of additional information has been added, so that even tech-savvy internet users can find an explanation underpinning the test results.

The website, an initiative of the Dutch internet community and the Dutch Government, is used to promote standards that will enable us to make the best possible use of the internet as we know it. To beat internet crime and to improve our interconnectivity, we strongly believe in applying these modern internet standards. These will safeguard our websites, our email communications and our privacy — something all of us should care about. We are very happy to see a growing number of users that test local connections, domains as well as email settings. The tests provided at Internet.nl are quite fast and based on international collaboration efforts within the internet community. We think this is the only reasonable way forward in order to keep the internet a source of connecting people, sharing information and open access to a wide range of resources. We constantly aim at improving both our tests and our advice to the users of our website. This is only possible thanks to the continuing support of the members of the Dutch Internet Standards Platform. But your use of Internet.nl and all your questions and comments also help us to better understand how these modern standards can be used in the best way.

As a spin-off of our efforts, earlier this year the Dutch Secure Email Coalition was established: an initiative that aims to focus on improving security in our daily use of email. Besides members of the Dutch Internet Standards Platform, this coalition has new members from the government and industry sectors that work closely together in sharing knowledge and experience regarding the implementation of modern standards like DMARC/DKIM, SPF and DNSSEC. Our first meetings have been very informative and productive, and I look forward to seeing a growing number of organizations implementing these standards. I hope that still more users will find Internet.nl not only useful, but inspiring as well. The Hall of Fame shows that a growing number of organizations and individuals are able to reach a 100% score. I think that this is really promising, and I hope that all of you will help us to keep the internet open, free and secure!

Written by Gerben Klein Baltink, Chairman of the Dutch Internet Standards Platform

Follow CircleID on Twitter

More under: Access Providers, Cybersecurity, Policy & Regulation, Web [...]
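The email standards mentioned above are ultimately just DNS records, which makes a first-pass check easy to script. Below is a hedged sketch (assuming the dnspython 2.x package) that looks up the SPF and DMARC TXT records for a domain; Internet.nl's own tests are of course far more thorough, and "example.com" is only a placeholder.

```python
# Quick SPF/DMARC presence check via DNS TXT lookups (dnspython 2.x).
import dns.resolver

def txt_records(name: str) -> list[str]:
    try:
        return [b"".join(r.strings).decode() for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

def check_email_standards(domain: str) -> dict:
    spf = [t for t in txt_records(domain) if t.startswith("v=spf1")]
    dmarc = [t for t in txt_records(f"_dmarc.{domain}") if t.startswith("v=DMARC1")]
    return {"SPF": bool(spf), "DMARC": bool(dmarc)}

if __name__ == "__main__":
    print(check_email_standards("example.com"))
```

A real assessment would also validate the record contents and policies (and check DKIM selectors and DNSSEC signatures), which is exactly what the Internet.nl test suite does behind its single score.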



When a Domain Name Dispute is 'Plan B'

2017-08-04T08:01:00-08:00

"Plan B" (noun): an alternative plan of action for use if the original plan should fail. While having a backup plan is usually a good idea, it's often not an effective way to obtain someone else's domain name — at least not when Plan B consists of a company filing a UDRP complaint with the hope of getting a domain name to which it is not entitled and could not acquire via a negotiated purchase. "Plan B" as a derogatory way of describing an attempted domain name acquisition usually arises in the context of a domain name that is not protected by exclusive (or any) trademark rights, or where the complainant clearly could not prevail in a UDRP proceeding. Such as (to name a few real examples): , and . A Short History of 'Plan B' The label appears to have first been used in the context of domain name disputes in two decisions from 2007, one involving , the other and . In the case, the panel described Plan B like this: "Complainant commenced this proceeding as an alternative ('Plan B') to acquire the disputed domain name after being rebuffed in the anonymous auction process." Through the years, the "Plan B" terminology has been invoked in about 50 cases, nearly all of which resulted in decisions allowing the current registrants to keep the disputed domain names. The panel in the proceeding called it "a classic 'Plan B' case where a party, having been frustrated in its negotiations to buy a domain name, resorts to the ultimate option of a highly contrived and artificial claim not supported by any evidence or the plain wording of the UDRP." In that case, the facts at first glance might seem appropriate for a UDRP case: The complainant had trademark rights in QUEEN that pre-dated the 1997 registration of the domain name, and the domain name was used in connection with a pornographic website. When the complainant contacted the registrant about buying the domain name, the registrant quoted a purchase price of $2 million or a lease at $15,000 per month. But, the panel focused on the "dictionary meaning" of the word "queen" and the fact that the domain name was not directed at the complainant, which is active in the flower-growing industry: [T]he Disputed Domain Name consists of a common term and the Respondent has used the Disputed Domain Name in a way which corresponds to one of the common meanings of that term. The Complainant has failed to give the Panel any reason to think that the Respondent registered the Disputed Domain Name to capitalize on the alleged fame of the Complainant's trademarks in any way, rather than in connection with one common meaning of the Disputed Domain Name. The fact that the Disputed Domain Name redirects to adult material does not alter this finding. Where a domain name registrant tries to obtain financial gain by registering and using a non-generic domain name in which it has no rights or legitimate interests, the offering of adult content may be evidence of bad faith use.... However, as the Disputed Domain Name has a dictionary meaning, those cases do not apply. In other words, thwarted in its effort to purchase the domain name, the complainant resorted to Plan B, filing a UDRP complaint. Obviously, that didn't work, and the complainant lost the UDRP case. Reverse Domain Name Hijacking In addition to simply losing a UDRP decision, a complainant that pursues a Plan B domain name dispute could see its plan backfire, if the UDRP panel finds that the complainant t[...]



Unlocking the Hidden Value Within Your IT Organization

2017-08-03T07:58:00-08:00

Many C-level executives are unaware their IT organizations could be sitting on a lucrative sellable supply of unused IPv4 addresses. Assessing and executing on the opportunity takes planning, but there's a clear path for getting it done.

In 2014 and 2015, buyers had their pick of large block holders with millions of available and unused numbers. This surplus allowed large buyers to shop around for the lowest offer and, as a result, drive prices down to a low of $4/number. The combination of low unit prices and large quantities of easily accessible address space stimulated a buying spree, with over 50 million numbers transferred globally in 2015. With the "low hanging fruit" sold off in 2015 and the effective depletion of ARIN's, RIPE's and APNIC's IPv4 free pools, the supply of IPv4 numbers in North America, Europe and APAC diminished significantly. Demand, however, did not. This drove prices up considerably in 2016, and prices are expected to continue their upward trend.

C-level executives are doing all they can to control costs, increase margins and drive revenue growth. But many of them are unaware that their IT organizations could be holding onto lucrative sellable assets. Address space flowing into the trading market often originates from companies that were given large quantities of "legacy" IPv4 numbers back in the 1980s and 90s, but no longer need so much address space. By selling their unused numbers, savvy address holders add directly (and often significantly) to their bottom lines while also helping the Internet community bridge the gap between IPv4 free pool depletion and full IPv6 migration. Market conditions are now producing considerable upward pressure on both demand and prices. This will increase sell-side opportunities and returns over the next two to three years, which will make it easier for executives to justify investing in renumbering projects to optimize their address space utilization and free up IPv4 blocks for sale.

Assessing the opportunities and executing IPv4 sale strategies takes some planning. Below is a roadmap for getting it done.

• Figure out what you have. Getting a good handle on a company's IPv4 inventory is the executive's first, most important step in understanding the value of the potential opportunity. The ARIN whois registry is a good place to start but may not tell the full story, particularly if the company has been involved in mergers or acquisitions.

• Figure out what you can sell. If a block is registered in the company's name, unadvertised and otherwise unused internally, then it may be a good candidate for immediate sale, subject to validating that the numbers have not been hijacked by a third party and do not have negative reputations or blacklist scores. But things are often more complex, and prospective sellers may need to perform some pre-work to clean up the registration records and renumber their space for greater usage efficiency and aggregation. The good news is that the returns on a sale typically dwarf the costs incurred.

• Bring your in-house counsel on board. Five years after the market's public emergence with the Microsoft-Nortel sale, some still question the legality of buying and selling numbers and regard the market with suspicion. Their concerns are unfounded. Internet governance institutions fully recognize and openly promote the marketplace as the only meaningful resource for their members to obtain IPv4 resources.
And federal courts, typically in bankruptcy proceedings, have approved the conveyance of IPv4 numbers as alienable assets.

• Get the technical t[...]
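As a hedged illustration of the "figure out what you have / what you can sell" steps above, the sketch below uses only Python's standard ipaddress module (3.7+) to subtract the sub-blocks still in use from a legacy allocation and report the unused, aggregated CIDR blocks that might be sale candidates. All prefixes shown are hypothetical.

```python
# List unused, aggregated CIDR blocks inside a legacy allocation.
import ipaddress

legacy_block = ipaddress.ip_network("198.51.0.0/16")   # hypothetical legacy allocation
in_use = [
    ipaddress.ip_network("198.51.0.0/20"),             # hypothetical: advertised/internal use
    ipaddress.ip_network("198.51.64.0/22"),
]

def unused_blocks(block, used):
    """Subtract each in-use subnet from the allocation and aggregate what remains."""
    candidates = [block]
    for u in used:
        next_round = []
        for c in candidates:
            if u.subnet_of(c):
                next_round.extend(c.address_exclude(u))   # carve the used subnet out
            elif not c.overlaps(u):
                next_round.append(c)                      # untouched by this subnet
            # if c is itself inside u, it is fully used and is dropped
        candidates = next_round
    return list(ipaddress.collapse_addresses(candidates))

for net in unused_blocks(legacy_block, in_use):
    print(net, f"({net.num_addresses} addresses)")
```

A real inventory would combine this with routing data (is the block advertised?), registry records, and reputation/blacklist checks before any block is shortlisted for transfer.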



Verizon, AT&T Speeds Slow After Unlimited Data Plans Launch

2017-08-02T17:25:00-08:00

Verizon and AT&T re-introduced their unlimited data plans earlier this year, and as a result, studies show that the providers' 4G speeds and overall speeds have fallen due to increased data demand on their networks. Analyzing more than 5 billion measurements, OpenSignal compared the 3G and 4G performance of the big 4 mobile operators in the U.S. From the report: "It's been a fascinating six months for the U.S. mobile industry. After years of retreating from all-you-can-eat data services, both Verizon and AT&T reintroduced unlimited plans this year to counter the increasing threat of T-Mobile and Sprint. Those new plans not only had a big impact on the competitive landscape in the U.S. but also on OpenSignal's metrics. Our measured average speeds on Verizon and AT&T's networks have clearly dropped, almost certainly a result of new unlimited customers ramping up their data usage. Conversely, T-Mobile and Sprint's 4G and overall speeds are steadily increasing in our measurements. Those shifting speed results were one of the main reasons T-Mobile swept our six awards categories for this reporting period. Despite T-Mobile's wins, the Un-carrier and Verizon are still engaged in a very close fight in our 4G metrics in the urban battlegrounds of the U.S."

Follow CircleID on Twitter

More under: Access Providers, Mobile Internet, Telecom