
Greg's Server and StorageIO blog: IT information and data infrastructure, cloud, virtualization, software defined, storage I/O

Published: Thu, 7 December 2017 23:12:09 GMT

Last Build Date: Thu, 7 December 2017 23:12:09 GMT

Copyright: (C) Copyright 2006-2017 Server StorageIO and UnlimitedIO LLC. All rights reserved.

IT transformation Serverless Life Beyond DevOps with New York Times CTO Nick Rockwell Podcast

Thu, 30 November 2017 19:18:17 GMT

By Greg Schulz - November 30, 2017

In this Server StorageIO podcast episode, New York Times CTO / CIO Nick Rockwell (@nicksrockwell) joins me for a conversation discussing digital, business and IT transformation, serverless life beyond DevOps, and related topics. In our conversation we discuss challenges with metrics, understanding value vs. cost (particularly for software), Nick's perspective as both a CIO and CTO of the New York Times, and the importance of IT being involved in and understanding the business vs. just being technology focused. We also discuss the bigger, broader opportunity of serverless (aka microservices, containers) life beyond DevOps, and how higher-level business logic developers can benefit from the technology instead of just a DevOps-for-infrastructure focus. Buzzwords, buzz terms and themes include data center technologies, NY Times, data infrastructure, management, trends, metrics, digital transformation, tradecraft skills, DevOps and serverless among others. Check out Nick's post The Futile Resistance to Serverless here. Listen to the podcast discussion here (MP3, 16 minutes and 50 seconds) as well as on iTunes here.

Where to learn more

Learn more about serverless, DevOps and related data infrastructure topics via the following links: The Futile Resistance to Serverless (via Nick Rockwell blog); Nick Rockwell New York Times profile, Nick on Twitter (@NickRocksWell); Data Infrastructure Server Storage I/O related Tradecraft Overview; Did you want a side of SLBS (server less BS) with your software or hardware FUD?; Software Defined Data Infrastructure Essentials (CRC Press 2017); Server (various videos and podcasts, fun and for work)

What this all means and wrap-up

Check out my discussion here (MP3) with Nick Rockwell as we discuss IT and business transition, metrics, software development, and serverless life beyond DevOps. Also available on iTunes. Ok, nuff said, for now. Cheers Gs
Greg Schulz - Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier); on Twitter @storageio. Courteous comments are welcome for consideration. First published on StorageIOblog.com; any reproduction in whole or in part, with changes to content, without source attribution under title, or without permission is forbidden. All comments, (C) and (TM) belong to their owners/posters; other content (C) Copyright 2006-2017 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO. [...]

Data Protection Diaries Fundamental Topics Tools Techniques Technologies Tips

Sun, 26 November 2017 18:17:16 GMT

Data Protection Diaries Fundamental Topics Tools Techniques Technologies Tips

Companion to Software Defined Data Infrastructure Essentials - Cloud, Converged, Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017)

By Greg Schulz - November 26, 2017

This is Part 1 of a multi-part series on Data Protection fundamental tools, topics, techniques, terms, technologies, trends, tradecraft and tips, as a follow-up to my Data Protection Diaries series, as well as a companion to my new book Software Defined Data Infrastructure Essentials - Cloud, Converged, Virtual Server Storage I/O Fundamental Tradecraft (CRC Press 2017). The focus of this series is data protection, including Data Infrastructure Services: Availability, RAS, RAID and Erasure Codes (including LRC) (Chapter 9) and Data Infrastructure Services: Availability, Recovery Point (Chapter 10). Additional data protection related chapters include Storage Mediums and Component Devices (Chapter 7), Management, Access, Tenancy, and Performance (Chapter 8), as well as Capacity, Data Footprint Reduction (Chapter 11), Storage Systems and Solutions Products and Cloud (Chapter 12), and Data Infrastructure and Software-Defined Management (Chapter 13) among others. Posts in the series include excerpts from Software Defined Data Infrastructure (SDDI) pertaining to data protection for legacy along with software defined data centers (SDDC) and data infrastructures in general, along with related topics. In addition to excerpts, the posts also contain links to articles, tips, posts, videos, webinars, events and other companion material. Note that figure numbers in this series are those from the SDDI book and not in the order that they appear in the posts.
Posts in this series include:

Part 1 - Data Infrastructure Data Protection Fundamentals
Part 2 - Reliability, Availability, Serviceability (RAS) Data Protection Fundamentals
Part 3 - Data Protection Access Availability RAID Erasure Codes (EC) including LRC
Part 4 - Data Protection Recovery Points (Archive, Backup, Snapshots, Versions)
Part 5 - Point In Time Data Protection Granularity Points of Interest
Part 6 - Data Protection Security Logical Physical Software Defined
Part 7 - Data Protection Tools, Technologies, Toolbox, Buzzword Bingo Trends
Part 8 - Data Protection Diaries Walking Data Protection Talk
Part 9 - Who's Doing What (Toolbox Technology Tools)
Part 10 - Data Protection Resources Where to Learn More

Figure 1.5 Data Infrastructures and other IT Infrastructure Layers

Data Infrastructures

Data infrastructures exist to support business, cloud and information technology (IT) among other applications that transform data into information or services. The fundamental role of data infrastructures is to provide a platform environment for applications and data that is resilient, flexible, scalable, agile, efficient as well as cost-effective. Put another way, data infrastructures exist to protect, preserve, process, move, secure and serve data as well as their applications for information services delivery. Technologies that make up data infrastructures include hardware, software and managed services: servers, storage, I/O and networking, along with people, processes, policies and various tools spanning legacy, software-defined virtual, containers and cloud. Read more about data infrastructures (it's what's inside data centers) here.

Various Needs Demand Drivers For Data Protection

Why The Need For Data Protection

Data protection encompasses many different things, from accessibility, durability, resiliency, reliability and serviceability (RAS) to security along with consistency.
Availability includes basic, high availability (HA), business continuance (BC), business resiliency (BR), disaster recovery (DR), archiving, backup, logical and physical security, fault tolerance, isolation and containment spanning systems, applic[...]

HPE Announces AMD Powered Gen 10 ProLiant DL385 For Software Defined Workloads

Mon, 20 November 2017 14:15:16 GMT

HPE Announces AMD Powered Gen 10 ProLiant DL385 For Software Defined Workloads

By Greg Schulz - November 20, 2017

HPE announced today a new AMD EPYC 7000 powered Gen 10 ProLiant DL385 for software defined workloads, including server virtualization, software-defined data center (SDDC), software-defined data infrastructure (SDDI) and software-defined storage among others. These new servers are part of a broader Gen10 HPE portfolio of ProLiant DL systems.

24 Small Form Factor drive front view, DL385 Gen 10. Via HPE

The value proposition HPE is promoting for these new AMD powered Gen 10 DL385 servers, besides support for software-defined, SDDI, SDDC and related workloads, is security, density and lower price than competing systems. HPE claims that the new AMD EPYC system on a chip (SoC) processor powered Gen 10 DL385 offers up to 50 percent lower cost per virtual machine (VM) than traditional server solutions.

About HPE AMD Powered Gen 10 DL385

HPE AMD EPYC 7000 Gen 10 DL385 features:

2U (height) form factor
HPE OneView and iLO management
Flexible HPE finance options
Data Infrastructure Security
AMD EPYC 7000 System on Chip (SoC) processors
NVMe storage (embedded M.2 and U.2/8639 Small Form Factor (SFF) e.g. drive form factor)
Addresses server I/O and memory bottlenecks

These new HPE servers are positioned for:

Software Defined, Server Virtualization
Virtual Desktop Infrastructure (VDI) workspaces
HPC, Cloud and other general high-density workloads
General Data Infrastructure workloads that benefit from memory-centric designs or GPUs

Different AMD Powered DL385 ProLiant Gen 10 Packaging Options

Common across AMD EPYC 7000 powered Gen 10 DL385 servers are the 2U form factor, iLO management software and interfaces, flexible LAN on Motherboard (LOM) options, MicroSD (optional dual MicroSD), NVMe (embedded M.2 and SFF U.2) server storage I/O interfaces and drives, health and status LEDs, GPU support, and single or dual socket processors.
HPE DL385 Gen10 inside view. Via HPE

HPE DL385 Gen10 rear view. Via HPE

Other features include up to three storage drive bays, support for Large Form Factor (LFF) and Small Form Factor (SFF) devices (HDD and SSD) including SFF NVMe (e.g., U.2) SSD, up to 4 x GbE NICs, and a PCIe riser for GPU (an optional second riser requires the second processor). Other features and options include HPE SmartArray (RAID), up to 6 cooling fans, and internal and external USB 3. An optional universal media bay can add a front display, an optional Optical Disc Drive (ODD), and optional 2 x U.2 NVMe SFF SSD. Note the media bay occupies one of the three storage drive bays.

HPE DL385 form factor. Via HPE

Up to 3 x drive bays
Up to 12 LFF drives (2 per bay)
Up to 24 SFF drives (3 x 8 drive bays, 6 SFF + 2 NVMe U.2 or 8 x NVMe)

AMD EPYC 7000 Series

The AMD EPYC 7000 series is available in single and dual socket versions. View additional AMD EPYC speeds and feeds in this data sheet (PDF), along with AMD server benchmarks here.

HPE DL385 Gen 10 AMD EPYC specifications. Via HPE

AMD EPYC 7000 General Features:

Single and dual socket
Up to 32 cores, 64 threads per socket
Up to 16 DDR4 DIMMs over eight channels per socket (e.g., up to 2TB RAM)
Up to 128 PCIe Gen 3 lanes (e.g., combinations of x4, x8, x16, etc.)
Future 128GB DIMM support

AMD EPYC 7000 Security Features:

Secure processor and secure boot for malware rootkit protection
System memory encryption (SME)
Secure Encrypted Virtualization (SEV) hypervisor and guest virtual machine memory protection
Secure move (e.g., encrypted) between enabled servers

Where To Learn More

Learn more about Data Infrastructure and related server technology, trends, tools, techniques, tradecraft and tips with the following links: AMD EPYC 7000 System on Chip (SoC) processors; Gen10 HPE portfolio and ProLiant DL systems; various Data Infrastructure related news commentary, events, tips [...]

AWS Announces New S3 Cloud Storage Security Encryption Features

Tue, 14 November 2017 19:19:19 GMT

Amazon Web Services (AWS) recently announced new Simple Storage Service (S3) encryption and security enhancements including Default Encryption, Permission Checks, Cross-Region Replication ACL Overwrite, Cross-Region Replication with KMS, and Detailed Inventory Report. Another recent AWS announcement is PrivateLink endpoints within a Virtual Private Cloud (VPC).

AWS Service Dashboard

Default Encryption

Extending previous security features, you can now mandate that all objects stored in a given S3 bucket be encrypted, without specifying a bucket policy that rejects non-encrypted objects. There are three server-side encryption (SSE) options for S3 objects: keys managed by S3, AWS KMS managed keys, and SSE Customer (SSE-C) managed keys. These options provide more flexibility as well as control for different environments along with increased granularity. Note that encryption can be forced on all objects in a bucket by specifying a bucket encryption configuration. When an unencrypted object is stored in such a bucket, it inherits the bucket's encryption settings unless a different encryption option is specified on the PUT request.

AWS S3 Buckets Permission Checks

There is now an indicator on the S3 console dashboard prominently showing which S3 buckets are publicly accessible. In the above image, some of my AWS S3 buckets are shown, including one that is public facing. Note in the image how there is a notation next to buckets that are open to the public.

Cross-Region Replication ACL Overwrite and KMS

AWS Key Management Service (KMS) keys can be used for encrypting objects. Building on previous cross-region replication capabilities, when you replicate objects across AWS accounts you can now specify a new ACL providing full access to the destination account.

Detailed Inventory Report

The S3 Inventory report (which can also be encrypted) now includes the encryption status of each object.
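The default-encryption configuration described above is expressed as a small JSON document passed to the S3 API. Here is a minimal sketch using the AWS CLI's `aws s3api put-bucket-encryption` call; the bucket name is a placeholder (not from the post), and the apply step needs valid AWS credentials, so it is shown commented out while the JSON itself is validated locally:

```shell
# Default-encryption rule for SSE-S3 (AES256, keys managed by S3).
# For SSE-KMS, swap in "aws:kms" plus a KMSMasterKeyID.
cfg='{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

# Hypothetical apply step (requires credentials; bucket name is illustrative):
# aws s3api put-bucket-encryption --bucket my-example-bucket \
#   --server-side-encryption-configuration "$cfg"

# Locally, confirm the configuration JSON is well formed:
echo "$cfg" | python3 -m json.tool > /dev/null && echo "config OK"
```

Once applied, `aws s3api get-bucket-encryption --bucket <name>` returns the active rule, and unencrypted PUTs inherit it as described above.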
PrivateLink for AWS Services

PrivateLink enables AWS customers to access services from a VPC without using a public IP and without traffic having to cross the internet (e.g., traffic stays within the AWS network). PrivateLink endpoints appear as Elastic Network Interfaces (ENI) with private IPs in your VPC and are highly available, resilient and scalable. Besides scaling and resiliency, PrivateLink eliminates the need for whitelisting of public IPs as well as managing internet gateway, NAT and firewall proxies to connect to AWS services (Elastic Cloud Compute (EC2), Elastic Load Balancer (ELB), Kinesis Streams, Service Catalog, EC2 Systems Manager). Learn more about AWS PrivateLink for services here, including VPC Endpoint Pricing here.

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links: Cloud conversations: AWS EBS, Glacier and S3 overview (Part I); AWS S3 Storage Gateway Revisited (Part I); Amazon Web Service AWS September 2017 Software Defined Data Infrastructure Updates; S3motion Buckets Containers Objects AWS S3 Cloud and EMCcode; Cloud conversations: AWS EBS, Glacier and S3 overview (Part II S3); Cloud Conversations: AWS S3 Cross Region Replication storage enhancements Part II; Revisiting AWS S3 Storage Gateway (Test Drive Deployment); Software Defined Data Infrastructure Essentials (CRC Press 2017)

What This All Means

Common cloud concerns include privacy and security. AWS S3, among other industry cloud service and storage providers, has had its share of not-so-pleasant news coverage involving security. Keep in mind that data protection, including security, is a shared responsibility (and only you can prevent data loss). This means the vendor or service provider has to take care of their responsibility, making sure their solutions have proper data protection and security features by default as well as extensions, and making those capabilities known to consumers. The[...]

October 2017 Server StorageIO Data Infrastructure Update Newsletter

Mon, 30 October 2017 23:23:23 GMT

Server StorageIO October 2017 Data Infrastructure Update Newsletter [...]

Data Infrastructure server storage I/O network Recommended Reading #blogtober

Mon, 30 October 2017 22:22:22 GMT

Data Infrastructure Recommended Reading List and Book Shelf

Updated 10/30/17

The following is an evolving recommended reading list of data infrastructure topics including server, storage I/O, networking, cloud, virtual, container, data protection and related topics, spanning books, blogs, podcasts, events and industry links among other resources. Various Data Infrastructure (hardware, software, services) related links: Links A-E, Links F-J, Links K-O, Links P-T, Links U-Z, Other Links. In addition to my own books including Software Defined Data Infrastructure Essentials (CRC Press 2017), the following are Server StorageIO recommended reading, watching and listening list items. The list includes various IT, Data Infrastructure and related topics.

The Intel Recommended Reading List (IRRL) for developers is a good resource to check out. Check out the Blogtober list to see some of the blogs and posts occurring during October 2017 here. Preston De Guise (aka @backupbear), author of several books, has an interesting new site that looks at topics including ethics in IT among others. Check out his new book Data Protection: Ensuring Data Availability (CRC Press 2017), available here. Brendan Gregg has a great site for Linux performance related topics here. Greg Knieriemen has a must-read weekly blog, post and column collection of what's going on in and around the IT and data infrastructure related industries; check it out here. Interested in file systems, CIFS, SMB, SAMBA and related topics? Then check out Chris Hertel's book on implementing CIFS here. For those involved with VMware, check out Frank Denneman's VMware vSphere 6.5 host resource guide-book here. Docker: Up & Running: Shipping Reliable Containers in Production by Karl Matthias & Sean P. Kane via here. Essential Virtual SAN (VSAN): Administrator's Guide to VMware Virtual SAN, 2nd ed. by Cormac Hogan & Duncan Epping via here.
Hadoop: The Definitive Guide: Storage and Analysis at Internet Scale by Tom White via here. Systems Performance: Enterprise and the Cloud by Brendan Gregg via here. Implementing Cloud Storage with OpenStack Swift by Amar Kapadia, Sreedhar Varma, & Kris Rajana via here. The Human Face of Big Data by Rick Smolan & Jennifer Erwitt via here. VMware vSphere 5.1 Clustering Deepdive (Vol. 1) by Duncan Epping & Frank Denneman via here. Note: This is an older title, but there are still good fundamentals in it. Linux Administration: A Beginners Guide by Wale Soyinka via here. TCP/IP Network Administration by Craig Hunt via here. Cisco IOS Cookbook: Field-tested solutions to Cisco Router Problems by Kevin Dooley and Ian Brown via here. I often mention in presentations a must-have for anybody involved with software defined anything, or programming for that matter: the Niklaus Wirth classic Algorithms + Data Structures = Programs, which you can get here. Another great book to have is Seven Databases in Seven Weeks (here is a book review), which not only provides an overview of popular NoSQL databases such as Cassandra, Mongo and HBASE among others, but also lots of good examples and hands-on guides. Get your copy here.

Additional Data Infrastructure and related topic sites

In addition to those mentioned above, other sites, venues and data infrastructure related resources include: - Archiving and records management trade group - Various open-source software - Scott Lowe VMware Networking and topics - Ben Arms[...]

PCIe Server Storage I/O Network Fundamentals #blogtober

Sat, 28 October 2017 23:13:03 GMT

Peripheral Computer Interconnect Express, aka PCIe, is a fundamental server, storage and I/O networking component. This post is an excerpt from Chapter 4 (Servers: Physical, Virtual, Cloud, and Containers) of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available via and other global venues. In this post, we look at various PCIe fundamentals to learn, expand or refresh your server, storage, I/O and networking tradecraft skills and experience.

PCIe: fundamental common server I/O component

Common to all servers is some form of main system board, which can range from a few square meters in supercomputers, data center rack, tower and micro tower systems, converged or standalone, to small Intel NUC (Next Unit of Compute), MSI and Kepler-47 footprint, or Raspberry Pi-type desktop servers and laptops. Likewise, PCIe is commonly found in storage and networking systems and appliances among other devices. For example, a blade server will have multiple server blades or modules, each with its own motherboard, sharing a common backplane for connectivity. Another variation is a large server such as an IBM "Z" mainframe, Cray, or another supercomputer that consists of many specialized boards that function similarly to a smaller-sized motherboard on a larger scale. Some motherboards also have mezzanine or daughter boards for attachment of additional I/O networking or specialized devices. The following figure shows a generic example of a two-socket, eight-memory-channel server architecture.

Generic computer server hardware architecture. Source: Software Defined Data Infrastructure Essentials (CRC Press 2017)

The above figure shows several PCIe, USB, SAS, SATA, 10 GbE LAN, and other I/O ports. Different servers will have various combinations of processor and Dual Inline Memory Module (DIMM) Dynamic RAM (DRAM) sockets along with other features.
What will also vary are the types and number of I/O and storage expansion ports, power and cooling, along with management tools or included software.

PCIe, Including Mini-PCIe, NVMe, U.2, M.2, and GPU

At the heart of many server I/O and connectivity solutions is the industry-standard PCIe interface. PCIe is used to communicate between CPUs and the outside world of I/O networking devices. The importance of a faster and more efficient PCIe bus is to support more data moving in and out of servers while accessing fast external networks and storage. For example, a server with a 40-GbE NIC or adapter would need a PCIe port capable of 5 GB per second. If multiple 40-GbE ports are attached to a server, you can see where the need for faster PCIe interfaces comes into play. As more VMs are consolidated onto physical machines (PMs), and as applications place more performance demand in terms of either bandwidth or activity (IOPS, frames, or packets per second), more 10-GbE adapters will be needed until the price of 40-GbE (also 25, 50 or 100 GbE) becomes affordable. It is not if, but rather when, you will grow into these performance needs, either on a bandwidth/throughput basis or to support more activity and lower latency per interface. PCIe is a serial interface that specifies how servers communicate between CPUs, memory, and motherboard-mounted as well as add-in card (AiC) devices. This communication includes support for attachment of onboard and host bus adapter (HBA) server storage I/O networking devices such as Ethernet, Fibre Channel, InfiniBand, RapidIO, NVMe (cards, drives, and fabrics), SAS, and SATA, among other interfaces. In addition to supporting attachment of traditional LAN, SAN, MAN, and WAN devices, PCIe is also used for attaching GPU and video cards to servers. Traditionally, PCIe has been focused on use inside a given server chassis. Today, however, PCIe is being dep[...]
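The 40-GbE example above can be sanity-checked with simple arithmetic. A minimal sketch, assuming the commonly cited ~985 MB/s of usable bandwidth per PCIe Gen 3 lane (8 GT/s with 128b/130b encoding); these figures are approximations, not from the book excerpt:

```shell
# 40 Gbit/s of Ethernet payload is 40/8 = 5 GB/s.
gbe=40
payload_gb=$(( gbe / 8 ))     # GB/s the NIC can move at line rate

# PCIe Gen 3: ~985 MB/s usable per lane after 128b/130b encoding overhead.
lane_mb=985
# Minimum lanes = ceiling of (payload in MB/s) / (per-lane MB/s).
lanes=$(( (payload_gb * 1000 + lane_mb - 1) / lane_mb ))

echo "${gbe} GbE ~ ${payload_gb} GB/s -> at least ${lanes} Gen3 lanes (an x8 slot in practice)"
```

This is why a single 40-GbE adapter needs an x8 Gen 3 slot, and why multiple fast ports quickly push servers toward more and faster PCIe lanes.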

Introducing Windows Subsystem for Linux WSL Overview #blogtober

Wed, 25 October 2017 18:17:16 GMT

Introducing Windows Subsystem for Linux WSL Overview #blogtober

Microsoft has been increasing its support of Linux across the Azure public cloud, Hyper-V and Linux Integration Services (LIS), and Windows platforms including Windows Subsystem for Linux (WSL) as well as Server, along with Docker support.

WSL with Ubuntu installed and open in a window on one of my Windows 10 systems.

WSL is not a virtual machine (VM) running on Windows or Hyper-V; rather, it is a subsystem that coexists next to Win32 (read more about how it works, its features and enhancements here). Once installed, WSL enables use of the Linux bash shell along with familiar tools (find, grep, sed, awk, rsync among others) as well as services such as ssh and MySQL among others. What this all means is that if you work with both Windows and Linux, you can do so on the same desktop, laptop, server or system using your preferred commands. For example, in one window you can be using PowerShell or traditional Windows commands and tools, while in another window working with grep, find and other tools, eliminating the need to install things such as wingrep among others.

Installing WSL

Depending on which release of Windows desktop or server you are running, there are a couple of different install paths. Since my Windows 10 is the most recent release (e.g., 1709), I was able to simply go to the Microsoft Windows Store via the desktop, search for Windows Linux, select the distribution, install and launch. Microsoft has some useful information for installing WSL on different Windows versions here, as well as for Windows Servers here. Get WSL from the Windows Store, or find more information and options here. Click on Get the app, then select the desired WSL distribution. Let's select SUSE, as I already have Ubuntu installed (I have both). SUSE WSL in the process of downloading.
Note SUSE needs a free access code; while waiting for the download and install is a good time to get that code. Launching WSL with SUSE, you will be prompted to enter the code mentioned above; if you do not have a code, get it here from SUSE. The WSL installation is very straightforward: enter the SUSE code (Ubuntu did not need a code). Note the Ubuntu and SUSE WSL taskbar icons circled bottom center. Provide a username for accessing the WSL bash shell along with a password, confirm how root and sudo are to be applied, and that is it. Seriously, the install for WSL, at least with Windows 10 1709, is that fast and easy. Note in the above image, I have WSL with Ubuntu open in a window on the left, WSL with SUSE on the right, and their taskbar icons bottom center.

Enable Windows Subsystem for Linux Feature on Windows

If you get the above WSL error message 0x8007007e when installing the WSL Ubuntu, SUSE or other shell distro, make sure to enable the Windows WSL feature if not already installed. One option is to install additional Windows features via Settings or Control Panel. For example: Control Panel -> Programs and Features -> Turn Windows features on or off -> check the box for Windows Subsystem for Linux. Another option is to install the Windows subsystem feature via PowerShell, for example:

enable-windowsoptionalfeature -online -featurename microsoft-windows-subsystem-linux

Using WSL

Once you have WSL installed, try something simple such as viewing your present directory:

pwd

Then look at the Windows C: drive location:

ls /mnt/c -al

In case you did not notice the above, you can use Windows files and folders from the bash shell by placing /mnt in front of the device path. Note that you need to be case-sensitive, such as User vs. user or Documents vs. documents. As a further example, I needed to change several .ht[...]
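The /mnt mapping described above is mechanical enough to sketch. The following is a minimal illustration of translating a Windows path into its WSL equivalent; note that newer WSL builds ship a `wslpath` utility that does this properly, and this GNU sed one-liner only shows the convention, assuming the default automount root of /mnt has not been changed:

```shell
# Windows path (drive letter, backslashes) -> WSL path (/mnt/<drive>/...).
winpath='C:\Users\Public\Documents'
wslpath_sketch=$(printf '%s' "$winpath" \
  | sed -e 's|\\|/|g' -e 's|^\([A-Za-z]\):|/mnt/\L\1\E|')
echo "$wslpath_sketch"   # /mnt/c/Users/Public/Documents
```

Remember the Linux side is case-sensitive even though NTFS paths are not, which is why Users vs. users matters once you are in bash.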

Fixing the Windows 10 1709 post upgrade restart loop

Mon, 23 October 2017 23:23:23 GMT

Recently I needed to upgrade one of my systems to the Microsoft Windows 10 1709 (e.g., September 2017) release, and post upgrade, Windows Explorer, the desktop and the taskbar went into an endless loop. For those not familiar with Windows 10 1709, learn more here and here, including how to get the bits (e.g., software). Windows 10 1709 is a semi-annual channel (SAC) release; Microsoft is following this model to enable a faster cadence or pace of releases, making new features available faster. Note that there is a Windows 10 1709 SAC, as well as a Windows Server SAC (more on that here). All was well with the 1709 install on Windows 10 until, post upgrade, I logged into my account on my laptop (Lenovo X1). Once logged in, initially everything looked good until about 10 to 20 seconds later, when the screen flickered and the desktop refreshed, as did the taskbar. All was well for about another 10 to 20 seconds, and again the desktop refreshed as did the taskbar. Trying to use the Windows key plus other keys had no success; likewise, trying to use a command prompt, PowerShell or other tools was futile given how quickly the refresh occurred. Powering off the system and rebooting seemed normal, until once logged in, again the desktop and taskbar reset in the same looping fashion. Once again I did a shutdown and restart, logged in, and got the same result.

The Safe Mode Fix

Unless you can access a command prompt or PowerShell with administrator privileges, boot into Windows Safe Mode. The solution to the post Windows 10 1709 upgrade desktop and taskbar restart loop was to boot into safe mode and run the following three commands:

sfc /scannow
dism.exe /online /cleanup-image /scanhealth
dism.exe /online /cleanup-image /restorehealth

Before you can run the above commands, access Windows Safe Mode. Tip: if your Windows 10 system presents a login screen, in the lower right corner select the Shutdown/Restart icon while holding down the SHIFT key, and select Restart.
Your system should reboot, presenting you with the following options; select Troubleshoot. Next select Advanced options, shown below. Next select Startup Settings, shown below. Note that this sequence of screens is also used for other troubleshooting scenarios, including boot problems, restoring an image, or rolling back to a previous protection point among other options. When the Startup Settings screen appears, select Restart to enter Safe Mode. Your system should then present the following options; select Safe Mode with Command Prompt (option 6). Next your system should display a Command Prompt where the following three commands are run:

sfc /scannow
dism.exe /online /cleanup-image /scanhealth
dism.exe /online /cleanup-image /restorehealth

Exit, shutdown, reboot, and all should be good.

Some Tips and Recommendations

Before any upgrade, make sure you have good backups to enable various recovery points if needed. If you have not done so recently, make sure you have System Restore enabled, as well as underlying hypervisor or storage system snapshots. If you have BitLocker enabled, before you do any upgrade make sure to have a copy of your keys handy in case you need to use them. If you rely on a PIN or fingerprint for login, make sure you have your real password handy. If you have not done so recently, make sure your secondary standby emergency access account is working; if you don't have one, create one.

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links: Cloud Conversations Azure AWS Service Maps via Microsoft; Microsoft Azure September 2017 Software Defined Data Infrastructure Updates; Microsoft Windows Server, Azure, Nano Life cycle Updates Overview; Review of Microsoft ReFS (Reliable File System) and resource links; Software Defined Data Infrastructure [...]

Cloud Conversations Azure AWS Service Maps via Microsoft

Wed, 18 October 2017 10:11:22 GMT

Cloud Conversations Azure AWS Service Maps via Microsoft

Microsoft has created an Azure and Amazon Web Services (AWS) service map showing corresponding services from both providers (image via Microsoft). Note that this is an evolving work in progress from Microsoft; use it as a tool to help position the different services from Azure and AWS. Also note that not all features or services may be available in all regions, so visit the Azure and AWS sites to see current availability. As with any comparison, these are often dated the day they are posted, hence this is a work in progress. If you are looking for another Microsoft-created Azure vs. AWS comparison, check that out here. If you are looking for an AWS vs. Azure comparison, do a simple Google (or Bing) search and watch all the various items appear, some sponsored, some not so sponsored, among others.

What's In the Service Map

The following AWS and Azure service categories are mapped:

Marketplace (e.g. where you select service offerings)
Compute (virtual machine instances, containers, virtual private servers, serverless microservices and management)
Storage (primary, secondary, archive, premium SSD and HDD, block, file, object/blobs, tables, queues, import/export, bulk transfer, backup, data protection, disaster recovery, gateways)
Network & Content Delivery (virtual networking, virtual private networks and virtual private cloud, domain name services (DNS), content delivery network (CDN), load balancing, direct connect, edge, alerts)
Database (relational, SQL and NoSQL document and key value, caching, database migration)
Analytics and Big Data (data warehouse, data lake, data processing, real-time and batch, data orchestration, data platforms, analytics)
Intelligence and IoT (IoT hub and gateways, speech recognition, visualization, search, machine learning, AI)
Management and Monitoring (management, monitoring, advisor, DevOps)
Mobile Services (management, monitoring, administration)
Security, Identity and Access (security, directory services, compliance, authorization, authentication, encryption, firewall)
Developer Tools (workflow, messaging, email, API management, media transcoding, development tools, testing, DevOps)
Enterprise Integration (application integration, content management)

Download a PDF version of the service map from Microsoft here.

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links.

What's a data infrastructure? (Via NetworkWorld)
Introduction to Azure Files (Microsoft Docs)
Public preview of Virtual Network service endpoints for Azure Storage and SQL Database
Azure debuts Availability Zones for high availability and resiliency (Azure)
Azure and AWS comparison (Microsoft Docs)
Cloud Storage Considerations (via WServerNews)
Microsoft Windows Server, Azure, Nano Life cycle Updates
Overview Review of Microsoft ReFS (Reliable File System) and resource links
Software Defined Data Infrastructure Essentials (CRC Press) book companion page
Amazon Web Service AWS September 2017 Updates
AWS S3 Storage Gateway Revisited (Part I)
Cloud conversations: AWS EBS, Glacier and S3 overview (Part II S3)
Microsoft September 2017 Updates

What This Means

On one hand this can and will likely be used as a comparison; however, use caution, as both Azure and AWS services are rapidly evolving, adding new features and extending others. Likewise the service regions and data center sites also continue to evolve, so use the above as a general guide or tool to help map which service offerings are similar between AWS and Azure. By the way, if you have not heard, it's Blogtober, check out some of the other blog[...]
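To make the idea of a service map concrete, here is a minimal sketch in Python of how such a cross-provider mapping can be represented and queried. The handful of pairings shown are commonly cited rough equivalents, not the authoritative list; Microsoft's published service map remains the source of record, and the function name here is purely illustrative.

```python
# Illustrative (not exhaustive) mapping of a few commonly compared
# Azure and AWS services; see Microsoft's service map for the full list.
AZURE_AWS_SERVICE_MAP = {
    "Azure Virtual Machines": "Amazon EC2",
    "Azure Blob Storage": "Amazon S3",
    "Azure Functions": "AWS Lambda",
    "Azure SQL Database": "Amazon RDS",
    "Azure Cosmos DB": "Amazon DynamoDB",
    "Azure Virtual Network": "Amazon VPC",
}

def aws_equivalent(azure_service: str) -> str:
    """Look up the roughly equivalent AWS service, if one is mapped."""
    return AZURE_AWS_SERVICE_MAP.get(azure_service, "no mapping listed")
```

As the post cautions, any such table is dated the day it is written; both providers add and rename services regularly.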

September 2017 Server StorageIO Data Infrastructure Update Newsletter

Wed, 11 October 2017 00:11:22GMT

Server StorageIO September 2017 Data Infrastructure Update Newsletter [...]

Microsoft Azure September 2017 Software Defined Data Infrastructure Updates

Wed, 11 October 2017 00:11:22GMT

Microsoft September 2017 Software Defined Data Infrastructure Updates

September was a busy month for data infrastructure topics as well as for Microsoft in terms of new and enhanced technologies. Wrapping up September was Microsoft Ignite, where Azure, Azure Stack, Windows, O365, AI, IoT, and development tools announcements occurred, along with others from earlier in the month. As part of the September announcements, Microsoft released a new version of Windows Server (e.g. 1709) that has a focus on enhanced container support. Note that if you have deployed Storage Spaces Direct (S2D) and are looking to upgrade to 1709, do your homework, as there are some caveats that will cause you to wait for the next release. Note that there had been new storage related enhancements slated for the September update; however, it was announced at Ignite that those were being pushed to the next semi-annual release. Learn more here and also here.

Azure Files and NFS

Microsoft made several Azure file storage related announcements and public previews during September, including native NFS based file sharing as a companion to existing Azure Files, along with a public preview of the new Azure File Sync service. Native NFS based file sharing (public preview announced, service slated to be available in 2018) is a software defined storage deployment of NetApp ONTAP running on top of Azure data infrastructure, including virtual machines, and leverages underlying Azure storage. Note that the new native NFS is in addition to the earlier native Azure Files accessed via HTTP REST and SMB3, enabling sharing of files inside the Azure public cloud, as well as accessible externally from Windows based and Linux platforms including on-premises. Learn more about Azure Storage and Azure Files here.

Azure File Sync (AFS)

Azure File Sync (AFS) has now entered public preview.
While users of Windows-based systems have been able to access and share Azure Files in the past, AFS is something different. I have used AFS for some time now across several private preview iterations, having seen how it has evolved, along with how Microsoft listens and incorporates feedback into the solution. Let's take a look at what AFS is, what it does, how it works, and where and when to use it, among other considerations. With AFS, different and independent systems can now synchronize file shares through Azure. Currently in the AFS preview, Windows Server 2012 and 2016 are supported, including bare metal, virtual, and cloud based. For example, I have had bare metal, virtual (VMware), and cloud (Azure and AWS) servers participating in file sync activities using AFS. Not to be confused with other storage related AFS acronyms, including the Andrew File System among others, the new Microsoft Azure File Sync service enables files to be synchronized across different servers via Azure. This is different than the previously available Azure File Share service, which enables files stored in Azure cloud storage to be accessed via Windows and Linux systems within Azure, as well as natively by Windows platforms outside of Azure. Likewise, this is different from the recently announced Microsoft Azure native NFS file sharing service in partnership with NetApp (e.g. powered by ONTAP Cloud). AFS can be used to synchronize across different on-premises as well as cloud servers that can also function as a cache. What this means is that for Windows work folders served via different on-premises servers, those files can be synchronized across Azure to other locations. Besides providing cache, cloud tiering, and enterprise file sync share (EFSS) capabilities, AFS also has robust optimization for data movement to and from the cloud[...]
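AFS's actual cloud tiering policy is internal to Microsoft's service; the sketch below only illustrates the general idea behind such tiering as described above: move the least recently used files to the cloud until a local free space target is met. All names (`FileInfo`, `pick_files_to_tier`) are hypothetical, not part of any AFS API.

```python
from typing import List, NamedTuple

class FileInfo(NamedTuple):
    name: str
    size_gb: float
    last_access_day: int  # higher = accessed more recently

def pick_files_to_tier(files: List[FileInfo], free_gb: float,
                       volume_gb: float, target_free_pct: float) -> List[str]:
    """Pick least-recently-used files to move to cloud storage until the
    local volume reaches the desired free-space percentage."""
    target_free_gb = volume_gb * target_free_pct / 100
    tiered = []
    for f in sorted(files, key=lambda f: f.last_access_day):  # coldest first
        if free_gb >= target_free_gb:
            break
        tiered.append(f.name)
        free_gb += f.size_gb  # tiering a file frees its local capacity
    return tiered
```

For example, with 10 GB free on a 100 GB volume and a 40% free-space target, the two coldest files get tiered and the most recently used file stays fully local.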

Dell EMC VMware September 2017 Software Defined Data Infrastructure Updates

Wed, 11 October 2017 00:11:22GMT

Dell EMC VMware September 2017 Software Defined Data Infrastructure Updates

September was a busy month, including VMworld in Las Vegas, which featured many Dell EMC and VMware (among other) software defined data infrastructure updates and announcements. A summary of September VMware (and partner) related announcements includes:

Workspace, security and endpoint solutions
Pivotal Container Service (PKS) with Google for Kubernetes serverless container management, DXC partnership for hybrid cloud management
Security enablement via AppDefense
Data infrastructure platform enhancements (integrated OpenStack, vRealize management tools, vSAN)
Multi-cloud and hybrid cloud support along with VMware on AWS
Dell EMC data protection for VMware and AWS environments

VMware and AWS via Amazon Web Services

Some of you might recall VMware's earlier attempt at public cloud with the vCloud Air service (see the Server StorageIO lab test drive here), which has since been deprecated (e.g. retired). This new approach by VMware leverages the large global presence of AWS, enabling customers to set up public or hybrid vSphere, vSAN and NSX based clouds, as well as software defined data centers (SDDC) and software defined data infrastructures (SDDI). VMware Cloud on AWS runs on dedicated, single-tenant hosts (unlike multi-tenant Elastic Cloud Compute (EC2) instances or VMs) and supports from 4 to 16 underlying hosts per cluster. Unlike EC2 virtual machine instances, VMware Cloud on AWS is delivered on elastic bare metal (e.g. dedicated private servers, aka DPS). Note that while AWS EC2 is more commonly known, AWS also has other options for server compute, including Lambda microservices serverless containers, as well as Lightsail virtual private servers (VPS).
Besides servers with storage optimized I/O featuring low latency NVMe accessed SSDs, and applicable underlying server I/O networking, VMware Cloud on AWS runs the VMware software stack directly on the underlying host servers (e.g. there is no virtualization nesting taking place). This means robust performance should be expected, like in your on-premises VMware environment. VM workloads can move between your onsite VMware systems and VMware Cloud on AWS using various tools. VMware Cloud on AWS is delivered and managed by VMware, including pricing. Learn more about VMware Cloud on AWS here, and here (VMware PDF) and here (VMware Hands On Lab, aka HOL). Read more about AWS September news and related updates here in this StorageIOblog post.

VMware and Pivotal PKS via Pivotal Container Service (PKS) and Google Kubernetes Partnership

During VMworld, VMware, Pivotal and Google announced a partnership for enabling Kubernetes container management called PKS (Pivotal Container Service). Kubernetes is evolving as a popular open source container microservice serverless management orchestration platform that has roots within Google. What this means is that what is good for Google and others for managing containers is now good for VMware and Pivotal. In related news, VMware has become a platinum sponsor of the Cloud Native Computing Foundation (CNCF). If you are not familiar with CNCF, add it to your vocabulary and learn more here.

Other VMworld and September VMware related announcements

Hyper converged data infrastructure provider Maxta has announced a VMware vSphere Escape Pod (parachute not included ;) ) to facilitate migration from ESXi based to Red Hat Linux hypervisor environments. IBM and VMware announced a cloud partnership, along with Dell EMC, IBM and VMware joint cloud solutions. White listing of VMware vSphere [...]

Amazon Web Service AWS September 2017 Software Defined Data Infrastructure Updates

Wed, 11 October 2017 00:11:22GMT

Amazon Web Services AWS September 2017 Software Defined Data Infrastructure Updates

September was a busy month pertaining to software defined data infrastructure, including cloud and related AWS announcements. One of the announcements included VMware partnering to deliver vSphere, vSAN and NSX data infrastructure components for creating software defined data centers (SDDC), also known as multi cloud and hybrid cloud, leveraging AWS elastic bare metal servers (read more here in a companion post). Unlike traditional partner software defined solutions that relied on AWS Elastic Cloud Compute (EC2) instances, VMware is being deployed using private bare metal AWS elastic servers. What this means is that the VMware vSphere (e.g. ESXi) hypervisor, vCenter, software defined storage (vSAN), software defined network (NSX) and associated vRealize tools are deployed on AWS data infrastructure that can be used for deploying hybrid software defined data centers (e.g. connecting to your existing VMware environment). Learn more about VMware on AWS here or click on the following image.

Additional AWS Updates

Amazon Web Services (AWS) updates include, coinciding with VMworld, the initial availability of VMware on AWS (using virtual private servers, e.g. think along the lines of Lightsail, not EC2 instances). AWS continues its expansion into database and table services with the Relational Database Service (RDS), including various engines (Amazon Aurora, MariaDB, MySQL, Oracle, PostgreSQL, and SQL Server) along with the Database Migration Service (DMS). Note that these RDS offerings are in addition to what you can install and run yourself on Elastic Cloud Compute (EC2) virtual machine instances, Lambda serverless containers, or Lightsail Virtual Private Servers (VPS). AWS has published a guide to database testing on Amazon RDS for Oracle, plotting latency and IOPS for OLTP workloads here using SLOB.
If you are not familiar with SLOB (Silly Little Oracle Benchmark), here is a podcast with its creator Kevin Closson discussing database performance and related topics. Learn more about SLOB and step by step installation for AWS RDS Oracle here, and for those who are concerned or think that you cannot run workloads to evaluate Oracle platforms, have a look at this here. EC2 enhancements include charging by the second (previously by the hour) for some EC2 instances (see details here, including what is or is not currently available), which is a growing trend by cloud vendors, aligning with how serverless containers have been billed. New large memory EC2 instances that, for example, support up to 3,904GB of DDR4 RAM have been added by AWS. Other EC2 enhancements include updated network performance for some instances, an OpenCL development environment to leverage AWS F1 FPGA enabled instances, along with new Elastic GPU enabled instances. Other server and network enhancements include the Network Load Balancer for Elastic Load Balancing being announced, as well as the Application Load Balancer now supporting load balancing to IP addresses as targets for AWS and on-premises (e.g. hybrid) resources. Other updates and announcements include data protection backups to AWS via Commvault and the AWS Storage Gateway VTL. IBM has announced its Spectrum Scale (e.g. formerly known as SONAS, aka GPFS) scale out storage solution for high performance compute (HPC) quick start on AWS. Additional AWS enhancements include a new edge location in Boston and a third Seattle site, while Direct Connect sites have been added in Boston and Houston along with Canberra, Australia. View more AWS announcements and enhanceme[...]
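The difference between per-second and per-hour EC2 billing mentioned above is easy to quantify. A minimal sketch, using a hypothetical hourly rate (the one-minute minimum charge for per-second billed instances reflects AWS's announced policy; check AWS pricing pages for current details):

```python
import math

def hourly_billed_cost(runtime_seconds: float, rate_per_hour: float) -> float:
    """Per-hour billing: any partial hour is rounded up to a full hour."""
    return math.ceil(runtime_seconds / 3600) * rate_per_hour

def per_second_billed_cost(runtime_seconds: float, rate_per_hour: float,
                           minimum_seconds: int = 60) -> float:
    """Per-second billing with a minimum charge (AWS applies a
    one-minute minimum to per-second billed instances)."""
    return max(runtime_seconds, minimum_seconds) * rate_per_hour / 3600

# A 10-minute (600 second) batch job at a hypothetical $0.36/hour rate:
# per-hour billing charges a full hour ($0.36), per-second only ~$0.06.
```

This is why per-second billing matters most for short-lived, bursty workloads such as the serverless containers the post compares it to; long-running instances see little difference.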

Getting Caught Up What Happened To September 2017

Thu, 28 September 2017 23:23:23GMT

Getting Caught Up, What Happened In September? Seems like just yesterday it was the end of August with the start of VMworld in Las Vegas; now it's the end of September and Microsoft Ignite in Orlando is wrapping up. Microsoft has made several announcements this week at Ignite, including Azure cloud related, AI, IoT, Windows platforms, and O365 among others. More about Microsoft Azure, Azure Stack, Windows Server, Hyper-V and related data infrastructure topics in future posts. Like many of you, September is a busy time of the year, so here is a recap of some of what I have been doing for the past month (among other things).

VMworld Las Vegas

During VMworld US, VMware announced enhanced workspace, security and endpoint solutions; Pivotal Container Service (PKS) with Google for Kubernetes serverless container management; a DXC partnership for hybrid cloud management; security enablement via its AppDefense solution; and data infrastructure platform enhancements including integrated OpenStack, vRealize management tools, and vSAN among others. VMware also made announcements including expanded multi-cloud and hybrid cloud support along with VMware on AWS, as well as Dell EMC data protection for VMware and AWS environments.

Software Defined Data Infrastructure Essentials (CRC Press) at VMworld bookstore

In other VMworld activity, my new book Software Defined Data Infrastructure Essentials (CRC Press) made its public debut in the VMworld book store, where I did a book signing event. You can get your copy of Software Defined Data Infrastructure Essentials, which includes Software Defined Data Centers (SDDC) along with hybrid, multi-cloud, serverless, converged and related topics, at Amazon among other venues. Learn more here.

Software Defined Everything (x)

In early September I was invited to present at the Wipro Software Defined Everything (x) event in New York City.
This event follows one Wipro invited me to present at in London, England this past January, the inaugural SDx Summit event. At the New York City event my presentation was Planning and Enabling Your Journey to SDx, which bridged the higher level big picture industry trends to the applied feet on the ground topics. Attendees of the event included customers, prospects, partners, and various analyst firms, along with Wipro personnel. At the Wipro event during a panel discussion, a question was asked about the definition of software defined. After the usual vendor and industry responses, mine was simple: put the emphasis on Define as opposed to software, with a focus on what the resulting outcome is. In other words, how and what are you defining (e.g. x), which could be storage, server, data center, data infrastructure, or network among others, to produce a particular result, outcome, service or capability. While the emphasis is around defined, that also can mean curate, compose, craft, program or whatever you prefer to create an outcome.

Role of Storage in a Software Defined Data Infrastructure

At the Storage Networking Industry Association (SNIA) Storage Developer Conference (SDC) in Santa Clara I gave a talk about the role of storage in software defined data infrastructures. The theme was that not only is there a role, storage is fundamental and essential for any software defined data infrastructure (as well as legacy), from cloud to container, serverless to virtual servers, converged and hybrid among others. Other themes included the changing role of storage, along with how hardware needs software, software needs hardware, and serverless has hardware and software somewhere in the stack. Tradecraft along with other related data [...]

August 2017 Server StorageIO Data Infrastructure Update Newsletter

Sun, 27 August 2017 17:16:15GMT

Server StorageIO August 2017 Data Infrastructure Update Newsletter [...]

Hot Popular New Trending Data Infrastructure Vendors To Watch

Sat, 26 August 2017 23:23:23GMT

Hot Popular New Trending Data Infrastructure Vendors To Watch

A common question I get asked is who are the hot, popular, new or trending data infrastructure vendors to watch. Keep in mind that there is a difference between industry adoption and customer deployment, the former being what the industry (e.g. vendors, resellers, integrators, investors, consultants, analysts, press, media, bloggers or other influencers) likes, wants and needs to talk about. Then there is customer adoption and deployment, which is what is being bought, installed and used.

Some Popular Trending Vendors To Watch

The following is far from an exhaustive list; however, here are some that come to mind that I'm watching.

Apcera – Enterprise class containers and management tools
AWS – Rolls out new services like a startup with the size and momentum of a legacy player
Blue Medora – Data infrastructure insight, software defined management
Broadcom – Avago/LSI, legacy Broadcom, Emulex; the Brocade acquisition makes for an interesting portfolio
Chelsio – Server, storage and data infrastructure I/O technologies
Commvault – Data protection and backup solutions
Compuverde – Software defined storage
Data Direct Networks (DDN) – Scale out and high performance storage
Datadog – Software defined management, data infrastructure insight, analytics, reporting
Datrium – Converged software defined data infrastructure solutions
Dell EMC Code – REX-Ray container persistent storage management
Docker – Container and management tools
E8 Storage – NVMe based storage solutions
Elastifile – Scale out software defined storage and file system
Enmotus – MicroTiering that works with Windows, Linux and various cloud platforms
Everspin – Storage class memories and NVDIMM
Excelero – NVMe based storage
Hedvig – Scale out software defined storage
Huawei – While not common in the US, in Europe and elsewhere they are gaining momentum
Intel – Watch what they do with Optane and storage class memories
Kubernetes – Container software defined management
Liqid – Stealth Colorado startup focusing on PCIe fabrics and composable infrastructure
Maxta – Hyper converged infrastructure (HCI) and software defined data infrastructure vendor
Mellanox – While not a startup, keep an eye on what they are doing with their adapters
Micron – Watch what they do with 3D XPoint storage class memory and SSD
Microsoft – Not a startup; however, keep an eye on Azure, Azure Stack, Windows Server with S2D, ReFS, tiering, CI/HCI as well as Linux services on Windows
Minio – Software defined storage solutions
NetApp – While FAS/ONTAP and SolidFire get the headlines, E-Series generates revenue; keep an eye on StorageGRID and AltaVault
NeuVector – Container management and security
NooBaa – Software defined storage and more
NVIDIA – No longer just another graphics processing unit based company
Pivot3 – An original HCI software defined player; granted, some of their competitors might not think so
Pluribus Networks – Software defined networks for software defined data infrastructures
Portworx – Container management and persistent storage
Rozo Systems – Scale out software defined storage and file system
Rubrik – Data protection software; reminds me of a startup called Commvault 20 years ago
ScaleMP – Compos[...]

Travel Fun Crossword Puzzle For VMworld 2017 Las Vegas

Thu, 24 August 2017 18:18:18GMT

Travel Fun Crossword Puzzle For VMworld 2017 Las Vegas

Some of you may be traveling to VMworld 2017 in Las Vegas next week to sharpen, expand, refresh or share your VMware and data infrastructure tradecraft (skills, experiences, expertise, knowledge). Here is something fun to sharpen your VMware skills while traveling. Most of these should be pretty easy, meaning that you do not have to be a Unicorn, full of vCertifications, vCredentials, or a 9, 8, 7, 6, 5, 4, 3, 2 or 1st time vExpert or top 100 vBlogger. However, if you need the answers, they are below. Note that you can also click here (or click on the image) to get a larger PDF version that also has the answers. For those of you who will be in Las Vegas at VMworld next week, stop by the VMworld Book Store at 1PM on Tuesday (the 29th), where I will be doing a book signing event for my new book Software Defined Data Infrastructure Essentials (CRC Press); stop by and say hello. Note there are also Kindle and other electronic versions of my new SDDI Essentials book on Amazon and other venues if you need something to read during your upcoming travels.

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links.

What's a data infrastructure? (Via NetworkWorld)
Software Defined Data Infrastructure Essentials (CRC Press) book companion page (includes various images, sample figures and added content)
Object and Cloud Storage Center
NVM, flash, SSD, SCM and related topics
NVM Express (NVMe) related topics
Answers to the crossword puzzle (PDF)
VMworld 2017 site
Various IT and Cloud Infrastructure Layers including Data Infrastructures

What This All Means

Have a safe and fun trip on your way to Las Vegas for next week's VMworld, enjoy the crossword puzzle, and if you need the answers, they are located here (PDF). See you at VMworld 2017 in Las Vegas. Ok, nuff said, for now.
Gs Greg Schulz - Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2017 Server StorageIO(R) and UnlimitedIO All Rights Reserved [...]

Announcing Software Defined Data Infrastructure Essentials Book by Greg Schulz

Wed, 23 August 2017 19:18:17GMT

New SDDI Essentials Book by Greg Schulz of Server StorageIO

SDDC, Cloud, Converged, and Virtual Fundamental Server Storage I/O Tradecraft

Over the past several months I have been posting, commenting, presenting and discussing more about data infrastructures and my new book (my 4th solo project), officially announced today: Software Defined Data Infrastructure Essentials (CRC Press). Software Defined Data Infrastructure (SDDI) Essentials is now generally available at various global venues in hardback print as well as various electronic versions, including via Amazon and CRC Press among others. For those attending VMworld 2017 in Las Vegas, I will be doing a book signing, meet and greet at 1PM Tuesday August 29 in the VMworld book store, as well as presenting at various other fall industry events.

Software Defined Data Infrastructure (SDDI) Announcement (Via Businesswire)

Stillwater, Minnesota – August 23, 2017 – Server StorageIO, a leading independent IT industry advisory and consultancy firm, in conjunction with publisher CRC Press, a Taylor and Francis imprint, announced the release and general availability of "Software-Defined Data Infrastructure Essentials," a new book by Greg Schulz, noted author and Server StorageIO founder. The Software Defined Data Infrastructure Essentials book covers physical, cloud, converged (and hyper-converged), container, and virtual server storage I/O networking technologies, revealing trends, tools, techniques, and tradecraft skills.
Various IT and Cloud Infrastructure Layers including Data Infrastructures

From cloud web scale to enterprise and small environments, IoT to database, software-defined data center (SDDC) to converged and container servers, flash solid state devices (SSD) to storage and I/O networking, the book helps develop or refine hardware, software, services and management experiences, providing real-world examples for those involved with or looking to expand their data infrastructure education, knowledge and tradecraft skills. Software Defined Data Infrastructure Essentials book topics include:

Cloud, converged, container, and virtual server storage I/O networking
Data protection (archive, availability, backup, BC/DR, snapshot, security)
Block, file, object, structured, unstructured and data value
Analytics, monitoring, reporting, and management metrics
Industry trends, tools, techniques, decision making
Local and remote server, storage and network I/O troubleshooting
Performance, availability, capacity and economics (PACE)

Where To Purchase Your Copy

Order via Amazon and CRC Press along with Google Books among other global venues.

What People Are Saying About Software Defined Data Infrastructure Essentials Book

"From CIOs to operations, sales to engineering, this book is a comprehensive reference, a must-read for IT infrastructure professionals, beginners to seasoned experts," said Tom Becchetti, advisory systems engineer.

"We had a front row seat watching Greg present live in our education workshop seminar sessions for IT professionals in the Netherlands material that is in this book. We recommend this amazing book to expand your converged and data infrastructure knowledge from beginners to industry veterans." – Gert and Frank Brouwer, Brouwer Storage Consultancy

"Software-Defined Data Infrastructures provides the foundational building blocks to improve your craft in several areas includ[...]

Chelsio Storage over IP and other Networks Enable Data Infrastructures

Fri, 18 August 2017 18:17:16GMT

Chelsio and Storage over IP (SoIP) continue to enable data infrastructures from legacy to software defined virtual, container, cloud, as well as converged. This past week I had a chance to visit with Chelsio to discuss data infrastructures, server storage I/O networking, along with other related topics. More on Chelsio later in this post; however, for now let's take a quick step back and refresh what SoIP (Storage over IP) is, along with storage over Ethernet (among other networks).

Various IT and Cloud Infrastructure Layers including Data Infrastructures

Server Storage over IP Revisited

There are many variations of SoIP, from network attached storage (NAS) file based processing including NFS and Samba/SMB (aka Windows file sharing) among others, to various block protocols such as SCSI over IP (e.g. iSCSI), along with object via HTTP/HTTPS, not to mention the buzzword bingo list of RoCE, iSER, iWARP, RDMA, DPDK, FTP, FCoE, iFCP, and SMB3 Direct to name a few.

Who is Chelsio

For those who are not aware or need a refresher, Chelsio is involved with enabling server storage I/O by creating ASICs (Application Specific Integrated Circuits) that perform various functions, offloading them from the host server processor. What this means for some is a throwback to the early 2000s TCP Offload Engine (TOE) era, where processing of regular network traffic, along with iSCSI and other storage over Ethernet and IP, could be accelerated.

Chelsio ecosystem across different data infrastructure focus areas and application workloads

As seen in the image above, there is certainly a server and storage I/O network play with Chelsio, along with traffic management, packet inspection, security (encryption, SSL and other offload), traditional, commercial, web, high performance compute (HPC), along with high profit or productivity compute (the other HPC).
Chelsio also enables data infrastructures that are part of physical bare metal (BM), software defined virtual, container, cloud, and serverless environments among others. The above image shows how Chelsio enables initiators on server and storage appliances as well as targets via various storage over IP (or Ethernet) protocols. Chelsio also plays in several different sectors, from *NIX to Windows, cloud to containers, and various processor architectures and hypervisors. Besides diverse server storage I/O enabling capabilities across various data infrastructure environments, what caught my eye with Chelsio is how far they, and storage over IP, have progressed over the past decade (or more). Granted there are faster underlying networks today; however, the offload and specialized chip sets (e.g. ASICs) have also progressed, as seen in the above and following series of images via Chelsio. The above shows TCP and UDP acceleration; the following shows Microsoft SMB 3.1.1 performance, something important for doing Storage Spaces Direct (S2D) and Windows-based Converged Infrastructure (CI) along with Hyper Converged Infrastructure (HCI) deployments. Something else that caught my eye was iSCSI performance, which in the following shows 4 initiators accessing a single target doing about 4 million IOPS (reads and writes) across various sizes and configurations. Granted that is with a 100Gb network interface; however, it also shows that potential bottlenecks are removed, enabling that faster network to be used more effectively. Moving on from TCP, UDP and iSCSI, NVMe and in particular NVMe over Fabrics (NVMeoF) have become popular industry topics, so check out the follow[...]
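As a sanity check on results like the iSCSI numbers above, it helps to convert an IOPS rate at a given I/O size into line-rate bandwidth. A quick sketch (payload only, ignoring Ethernet/TCP/iSCSI framing overhead; the I/O sizes below are illustrative, since the tested sizes and configurations varied):

```python
def iops_to_gbps(iops: float, io_size_bytes: int) -> float:
    """Convert an IOPS rate at a given I/O size to Gbit/s of payload
    (ignores Ethernet/TCP/iSCSI protocol framing overhead)."""
    return iops * io_size_bytes * 8 / 1e9

# 4 million IOPS of small 512-byte I/Os is only ~16.4 Gbit/s of payload,
# while the same rate at 4 KiB would be ~131 Gbit/s, more than a single
# 100Gb link, which is consistent with aggregating multiple initiators.
```

The takeaway matches the post's point: with offload removing host-side bottlenecks, small-block IOPS leave plenty of headroom on a 100Gb link, and the link itself only becomes the limit at larger block sizes.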

Microsoft Azure Cloud Software Defined Data Infrastructure Reference Architecture Resources

Thu, 17 August 2017 23:23:23GMT

Microsoft Azure Cloud Software Defined Data Infrastructure Reference Architecture Resources

Need to learn more about Microsoft Azure cloud software defined data infrastructure topics, including reference architectures among other resources for various application workloads? Microsoft Azure has an architecture and resources page (here) that includes various application workload reference tools.

Azure Reference Architectures via Microsoft Azure

Examples of Azure reference architectures for various applications and workloads include, among others:

SharePoint High Availability (HA)
Windows Virtual Machines (standalone, availability and other options)
Linux Virtual Machines (standalone, availability and other options)
Managed Web Applications including Multi Region
SAP Deployments

For example, need to know how to configure a high availability (HA) SharePoint deployment with Azure? Then check out the reference architecture shown below.

SharePoint HA via Microsoft Azure

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links.

Cloud Storage Decision Making and more here
Microsoft Windows Server, Azure, Nano Life cycle Updates
Gaining Server Storage I/O Insight into Microsoft Windows Server 2016
Overview Review of Microsoft ReFS (Reliable File System) and resource links
Dell EMC Announce Azure Stack Hybrid Cloud Solution
Azure Stack Technical Preview 3 (TP3) Overview Preview Review
Microsoft Azure Architecture and Reference Resources
NVMe related and flash SSD along with cloud, bulk, object storage topics
Software Defined Data Infrastructure Essentials (CRC Press)

Various IT and Cloud Infrastructure Layers including Data Infrastructures

What This All Means

Data infrastructures exist to protect, preserve, secure and serve information along with the applications and data they depend on.
Software Defined Data Infrastructures span legacy, virtual, container, cloud and other environments to support various application workloads. Check out the Microsoft Azure cloud reference architecture and resources mentioned above as well as the Azure Free trial and getting started site here. Ok, nuff said, for now. Gs Greg Schulz - Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2017 Server StorageIO(R) and UnlimitedIO All Rights Reserved [...]

Like Data They Protect, For Now Quantum Revenues Continue To Grow

Mon, 14 August 2017 14:13:12GMT

For Now Quantum Revenues Continue To Grow

The other day, following their formal announcement, I received a summary update from Quantum pertaining to their recent Q1 results (shown later below). Various IT and Cloud Infrastructure Layers including Data Infrastructures

Quantum's Revenues Continue To Grow Like Data

One of the certainties in life is change; another is continued growth in data that gets transformed into information via IT and other applications. Data Infrastructures' fundamental role is to enable an environment for applications and data to be transformed into information and delivered as services. In other words, Data Infrastructures exist to protect, preserve, secure and serve information along with the applications and data they depend on. Quantum's role is to provide solutions and technologies for enabling legacy, cloud and other software defined data infrastructures to protect, preserve, secure and serve data. What caught my eye in Quantum's announcements was that, while not the earth-shattering growth numbers normally associated with a hot startup, for a legacy data infrastructure and storage vendor, Quantum's numbers are hanging in there. At a time when some legacy vendors as well as startups struggle with increased competition from others including cloud, Quantum appears, for now at least, to be hanging in there with some gains. The other thing that caught my eye is that most of the growth, not surprisingly, is in non-tape related solutions, particularly around their bulk scale out StorNext storage solutions, though there is some growth in tape. 
Here is the excerpt of what Quantum sent out. Highlights for the quarter (all comparisons are to the same period a year ago):
  • Grew total revenue and generated profit for 5th consecutive quarter
  • Total revenue was up slightly to $117M, with 3% increase in branded revenue
  • Generated operating profit of $1M with earnings per share of 4 cents, up 2 cents
  • Grew scale-out tiered storage revenue 10% to $34M, with strong growth in video surveillance and technical workflows
    o Key surveillance wins included deals with an Asian government for surveillance at a presidential palace and other government facilities, with a major U.S. port and with four new police department customers
    o Established several new surveillance partnerships – one of top three resellers/integrators in China (Uniview) and two major U.S. integrators (Protection 1 and Kratos)
    o Won two surveillance awards for StorNext – Security Industry Association’s New Product Showcase award and Security Today magazine’s Platinum Govies Government Security award
    o Key technical workflow wins included deals at an international defense and aerospace company to expand a StorNext archive environment, a leading biotechnology firm for a 1 PB genomic sequencing archive, a top automaker involving autonomous driving research data and a U.S. technology institute involving high performance computing
    o Announced StorNext 6, which adds new advanced data management features to StorNext’s industry-leading performance and is now shipping
    o Announced scale-out partnerships with Veritone on artificial intelligence and DataFrameworks on data visualization and management
  • Tape automation, devices and media revenue increased 6% overall while branded revenue for this product category was up 14%
    o Strong sales of newest generation Scalar i3 and i6 t[...]

Server StorageIO Industry Trends Perspectives Report WekaIO Matrix

Sat, 12 August 2017 15:14:13GMT

Server StorageIO Industry Trends Perspectives Report WekaIO Matrix

WekaIO is a scale out software defined storage startup vendor whose solution is Matrix.


This Server StorageIO Industry Trends Perspective report looks at common issues, trends, and how to address different application server storage I/O challenges. In this report, we look at WekaIO Matrix, an elastic, flexible, highly scalable, easy to use (and manage) software defined (e.g. software based) storage solution. WekaIO Matrix enables flexible elastic scaling with stability and without compromise.

Matrix is a new scale out software defined storage solution that:
  • Installs on bare metal, virtual or cloud servers
  • Has POSIX, NFS, SMB, and HDFS storage access
  • Adaptable performance for little and big data
  • Tiering of flash SSD and cloud object storage
  • Distributed resilience without compromise
  • Removes complexity of traditional storage
  • Deploys on bare metal, virtual and cloud environments

Read more in this (free, no registration required) Server StorageIO Industry Trends Perspective (ITP) WekaIO Matrix Report compliments of WekaIO.

Ok, nuff said, for now.


Like IT Data Centers, Do You Take Trade Show Exhibit Infrastructure For Granted?

Fri, 11 August 2017 16:15:14GMT

Do You Take Trade Show Exhibit Infrastructure For Granted? Think about this for a moment: do you assume that Information Technology (IT) and cloud based data centers, along with their associated Data Infrastructure supporting various applications, will be accessible when needed? Likewise, when you go to a trade show, conference, symposium, user group or another conclave, is it assumed that the trade show, exposition (expo), exhibits, booths, stands or demo areas will be ready, waiting and accessible?

Fire Disrupts Flash Memory Summit Conference Exhibits

This past week at the Flash Memory Summit (FMS) conference trade show event in Santa Clara, California, what normally would be taken for granted (e.g. the expo hall and exhibits) was disrupted. The disruption (more here and here) was caused by an early morning fire in one of the exhibitors' booths (stands) in the expo hall (view some photos here via Toms). Fortunately, nobody was hurt, at least physically, and damage (physical) appears to have been isolated. However, while the keynotes, panels, and other presentations did take place (the show must go on), the popular exhibit expo hall did not. Granted, for some people who only attend conference or seminar events for the presentation content, lack of the exhibition hall simply meant no free giveaways. On the other hand, for those who attend events like FMS mainly for the exhibition hall experience, the show did not go on, perhaps resulting in a trip in vain (e.g. a scenario in which you might be able to recoup some travel costs) for some people. For example, those who were attending to meet with a particular vendor, see a product technology, conduct some business or other meetings, do an interview, video, podcast, take some photos, or simply get some free stuff were disrupted. 
Likewise those behind the scenes, from conference organizers to event staff, not to mention the vendor sponsors who put resources (time, money, people, and equipment) into an exhibit, were disrupted. Vendors were still able to issue their press releases and conduct their presentations, keynotes and panel discussions; however, what about the lack of the expo?

Do We Take IT Data Centers, Data Infrastructures and Event Infrastructures For Granted?

This begs the question of whether trade show exhibits still have value, or whether an event can function without one. I am not sure, as some events can and do stand on their merit with presentation content being the primary focus; for others the expo is the draw; many are hybrids with a mix of both. A question and point of this piece is how many people take conferences in general, and exhibits along with their associated infrastructure, for granted? How many know or understand the amount of time, money, people resources and various tradecraft skills across different disciplines that go into event planning, staging, coordination and execution so that they occur? This also ties into the theme of how many people simply assume that IT data centers and clouds, along with their data infrastructure resources and services, are available to support applications along with data access to provide information? The same holds true for your telephone (plain old telephone system [POTS] and cellular or mobile) service, gas, electric, sewer, water, waste (garbage), traditional or network based television, internet provide[...]

NVMe Won't Replace Flash By Itself, They Complement Each Other

Wed, 9 August 2017 17:16:15GMT

NVMe Won't Replace Flash By Itself, They Complement Each Other

Various Solid State Devices (SSD) including NVMe, SAS, SATA, USB, M.2

There has been some recent industry marketing buzz generated by a startup to get some attention, claiming via a study it sponsored that Non-Volatile Memory (NVM) Express (NVMe) will replace flash storage. Granted, many IT customers as well as vendors are still confused by NVMe, thinking it is a storage medium as opposed to an interface used for accessing fast storage devices such as NAND flash among other solid state devices (SSDs). Part of that confusion can be tied to the fact that common SSD based devices rely on NVM, that is, persistent memory that retains data when powered off (unlike the memory in your computer). Instead of saying NVMe will mean the demise of flash, what should or could be said (though some might be scared to say it) is that other interfaces and protocols such as SAS (Serial Attached SCSI), AHCI/SATA, mSATA, Fibre Channel SCSI Protocol (aka FCP, aka simply Fibre Channel [FC]), iSCSI and others are what can be replaced by NVMe. NVMe is simply the path or roadway, along with the traffic rules, for getting from point A (such as a server) to point B (some storage device or medium, e.g. flash SSD). The storage medium is where data is stored, such as magnetic for Hard Disk Drive (HDD) or tape, NAND flash, 3D XPoint or Optane among others.

NVMe and NVM including flash are better together

The simple, quick, get to the point version is that NVMe (e.g. Non-Volatile Memory [NVM] Express) is an interface protocol (like SAS/SATA/iSCSI among others) used for communicating with various nonvolatile memory (NVM) and solid state devices (SSDs). NVMe is how data gets moved between a computer or other system and the NVM persistent memory such as NAND flash, 3D XPoint, Spin torque or other storage class memories (SCM). 
In other words, the only thing NVMe will, should, might or could kill off would be the use of some other interface such as SAS, SATA/AHCI, Fibre Channel or iSCSI, along with proprietary drivers or protocols. On the other hand, given the extensibility of NVMe and how it can be used in different configurations including as part of fabrics, it is an enabler for various NVMs, also known as persistent memories, SCMs and SSDs, including those based on NAND flash as well as emerging 3D XPoint (or the Intel version) among others.

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links.
If Answer is NVMe, then what are (or were) the questions?
HDD topics and what to use for bulk and content storage
NVMe related and flash SSD along with cloud, bulk, object storage topics
Software Defined Data Infrastructure Essentials (CRC Press) that provides more coverage of NVM and NVMe among other topics.
Get in the NVMe SSD game (if you are not already) with Intel 750 among others
Server Storage I/O Benchmarking Performance Resource Tools

What This All Means

Context matters; for example, NVM as the medium compared to NVMe as the interface and access protocol. With context in mind you can compare like or similar apples to apples such as NAND flash, MRAM, NVRAM, 3D XPoint or Optane among other persistent memories, also known as storage class memories, NVMs and SSDs. Likewise with context in mi[...]
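To underscore the interface-versus-medium distinction, here is a minimal Python sketch. The names and pairings are illustrative only (not an exhaustive or authoritative list); the point is simply that the access interface and the storage medium vary independently.

```python
# Hypothetical illustration: the access interface (path/protocol) and the
# storage medium (where data lives) are separate dimensions.
INTERFACES = {"NVMe", "SAS", "SATA/AHCI", "iSCSI", "Fibre Channel"}
MEDIA = {"NAND flash", "3D XPoint", "HDD (magnetic)", "Tape (magnetic)"}

def describe(interface: str, medium: str) -> str:
    """Pair an access interface with a storage medium, with context."""
    if interface not in INTERFACES:
        raise ValueError(f"unknown interface: {interface}")
    if medium not in MEDIA:
        raise ValueError(f"unknown medium: {medium}")
    return f"{interface} is the interface (path); {medium} is the medium (where data lives)"

# NVMe can front different media; flash can sit behind different interfaces.
print(describe("NVMe", "NAND flash"))
print(describe("SAS", "NAND flash"))
print(describe("NVMe", "3D XPoint"))
```

The same medium (NAND flash) appears behind two interfaces, and the same interface (NVMe) fronts two media, which is why "NVMe replaces flash" confuses the two dimensions.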

Zombie Technology Life after Death Tape Is Still Alive

Mon, 24 July 2017 18:17:16GMT

Zombie Technology: Life after Death, Tape Is Still Alive

A Zombie Technology is one declared dead yet has life after death, such as tape, which is still alive.

Tape's Evolving Role

Sure, we have heard for decades about the death of tape, and someday it will be really dead and buried, no longer used, existing only in museums. Granted, tape has been on the decline for some time, and even with many vendors exiting the marketplace, there remains continued development and demand within various data infrastructure environments, including software defined as well as legacy. Tape remains viable for some environments as part of an overall memory and data storage hierarchy, including as a portable (transportable) as well as bulk storage medium. Keep in mind that tape's role as a data storage medium also continues to change, as does its location. The following table (via Software Defined Data Infrastructure Essentials (CRC Press), Chapter 10) shows examples of various data movements from source to destination. These movements include migration, replication, clones, mirroring, backup and copies, among others. The source device can be a block LUN, volume, partition, physical or virtual drive, HDD or SSD, as well as a file system, object, or blob container or bucket. Examples of the modes in Table 10.1 include D2D backup from local to local (or remote) disk (HDD or SSD) storage, or D2D2D copy from local to local storage, then to the remote.

Mode - Description
D2D - Data gets copied (moved, migrated, replicated, cloned, backed up) from source storage (HDD or SSD) to another device or disk (HDD or SSD) based device.
D2C - Data gets copied from a source device to a cloud device.
D2T - Data gets copied from a source device to a tape device (drive or library).
D2D2D - Data gets copied from a source device to another device, and then to another device.
D2D2T - Data gets copied from a source device to another device, then to tape.
D2D2C - Data gets copied from a source device to another device, then to cloud.

Data Movement Modes from Source to Destination

Note that movement from source to the target can be a copy, rsync, backup, replicate, snapshot, clone or robocopy among many other actions. Also note that in the earlier examples there are occurrences of tape existing in clouds (e.g. its place and use changing).

Tip - In the past, "disk" usually referred to HDD. Today, however, it can also mean SSD. Think of D2D as not being just HDD to HDD, as it can also be SSD to SSD, Flash to Flash (F2F), or S2S among many other variations if you prefer (or need).

For those still interested in tape, check out the Active Archive Alliance recent posts (here), as well as the 2017 Tape Storage Council Memo and State of their industry report (here). While lower-end tape such as LTO (which is not exactly the low-end it was a decade or so ago) continues to evolve, things may not be as persistent for tape at the high-end. With Oracle (via its Sun/StorageTek acquisition) exiting the high-end (e.g. mainframe focused) tape business, that leaves mainly IBM as a technology provider. With a single[...]
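The movement modes in the table above can be made concrete with a minimal sketch: a copy chain where each hop is a full copy of the previous stage. One hop is D2D, two hops is D2D2D, and the final hop could just as well be a tape or cloud target (D2D2T, D2D2C). This is an illustrative example with hypothetical names; local directories stand in for devices and cloud endpoints.

```python
import pathlib
import shutil
import tempfile

def copy_chain(src: pathlib.Path, hops: list) -> pathlib.Path:
    """Copy src through each hop in order (D2D, D2D2D, ... per hop count)."""
    current = src
    for hop in hops:
        hop.mkdir(parents=True, exist_ok=True)
        dest = hop / current.name
        shutil.copy2(current, dest)  # each hop is a full copy of the prior stage
        current = dest
    return current

# D2D2D: source -> local disk -> "remote" disk (all local dirs in this sketch)
root = pathlib.Path(tempfile.mkdtemp())
src_dir = root / "src"
src_dir.mkdir()
f = src_dir / "data.bin"
f.write_bytes(b"payload")
final = copy_chain(f, [root / "local", root / "remote"])
print(final.read_bytes() == f.read_bytes())  # True: identical data at final hop
```

Swapping the last hop for a tape library or cloud bucket writer turns the same chain into D2D2T or D2D2C without changing the pattern.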

Who Will Be At Top Of Storage World Next Decade?

Sat, 22 July 2017 19:18:17GMT

Data storage, regardless of whether hardware, legacy, new, emerging, cloud service or various software defined storage (SDS) approaches, is a fundamental resource component of data infrastructures, along with compute server, I/O networking as well as management tools, techniques, processes and procedures.

Fundamental Data Infrastructure resources

Data infrastructures include legacy along with software defined data infrastructures (SDDI) and software defined data centers (SDDC), cloud and other environments to support expanding workloads more efficiently as well as effectively (e.g. boosting productivity).

Data Infrastructure and other IT Layers (stacks and altitude levels)

Various data infrastructure resource components spanning server, storage, I/O networks and tools, along with hardware, software and services, get defined as well as composed into solutions or services, which may in turn be further aggregated into more extensive higher altitude offerings (e.g. further up the stack).

Various IT and Data Infrastructure Stack Layers (Altitude Levels)

Focus on Data Storage Present and Future Predictions

Drew Robb (@Robbdrew) has a good piece over at Enterprise Storage Forum looking at the past, present and future of who will rule the data storage world that includes several perspective and prediction comments from myself as well as others. Some of the perspectives and predictions by others are more generic, technology trend and buzzword bingo focused, which should not be a surprise, for example including the usual performance, Cloud and Object Storage, DPDK, RDMA/RoCE, Software-Defined, NVM/Flash/SSD, CI/HCI and NVMe among others. Here are some excerpts from Drew's piece along with my perspective and prediction comments on who may rule the data storage roost in a decade:

Amazon Web Services (AWS) – AWS includes cloud and object storage in the form of S3. 
However, there is more to storage than object and S3, with AWS also having Elastic File Services (EFS), Elastic Block Storage (EBS), database, message queue and on-instance storage, among others, for traditional and emerging workloads as well as storage for the Internet of Things (IoT). It is difficult to think of AWS not being a major player in a decade unless they totally screw up their execution in the future. Granted, some of their competitors might be working overtime putting pins and needles into Voodoo Dolls while wishing for the demise of Amazon Web Services, just saying. Of course, Amazon and AWS could follow the likes of Sears (e.g. some may remember their catalog) and ignore the future, ending up on the where are they now list. While talking about Amazon and AWS, one has to wonder where Walmart will end up in a decade, with or without a cloud of their own?

Microsoft – With Windows, Hyper-V and Azure (including Azure Stack), if there is any company in the industry outside of AWS or VMware that has quietly expanded its reach and positioning into storage, it is Microsoft, said Schulz. Microsoft IMHO has many offerings and capabilities across different dimensions as well as playing fields. There is the installed base of Windows Servers (and desktops) that have the ability to leverage Software Defined Storage including Storage Sp[...]

New family of Intel Xeon Scalable Processors enable software defined data infrastructures (SDDI) and SDDC

Tue, 11 July 2017 21:21:21GMT

Today Intel announced a new family of Xeon Scalable Processors (aka Purley) that for some workloads Intel claims to be on average 1.65x faster than their predecessors. Note that your real improvement will vary based on workload, configuration, benchmark testing, type of processor, memory, and many other server storage I/O performance considerations.

In general, the new Intel Xeon Scalable Processors enable legacy and software defined data infrastructures (SDDI), along with software defined data centers (SDDC), cloud and other environments, to support expanding workloads more efficiently as well as effectively (e.g. boosting productivity). Some target application and environment workloads Intel is positioning these new processors for include, among others:
  • Machine Learning (ML), Artificial Intelligence (AI), advanced analytics, deep learning and big data
  • Networking including software defined network (SDN) and network function virtualization (NFV)
  • Cloud and Virtualization including Azure Stack, Docker and Kubernetes containers, Hyper-V, KVM, OpenStack and VMware vSphere among others
  • High Performance Compute (HPC) and High Productivity Compute (e.g. the other HPC)
  • Storage including legacy and emerging software defined storage software deployed as appliances, systems or serverless deployment modes

Features of the new Intel Xeon Scalable Processors include:
  • New core micro architecture with interconnects and on-die memory controllers
  • Sockets (processors) scalable up to 28 cores
  • Improved networking performance using Quick Assist and Data Plane Development Kit (DPDK)
  • Leverages Intel Quick Assist Technology for CPU offload of compute intensive functions including I/O networking, security, AI, ML, big data, analytics and storage functions. Functions that benefit from Quick Assist include cryptography, encryption, authentication, cipher operations, digital signatures, key exchange, lossless data compression and data footprint reduction, along with data at rest encryption (DARE).
  • Optane Non-Volatile Dual Inline Memory Module (NVDIMM) for storage class memory (SCM), also referred to by some as Persistent Memory (PM), not to be confused with Physical Machine (PM)
  • Supports Advanced Vector Extensions 512 (AVX-512) for HPC and other workloads
  • Optional Omni-Path Fabrics in addition to 1/10Gb Ethernet among other I/O options
  • Six memory channels supporting up to 6TB of RDIMM with multi socket systems
  • From two to eight sockets per node (system)
  • Systems support PCIe 3.x (some supporting x4 based M.2 interconnects)

Note that exact speeds, feeds, slots and watts will vary by specific server model and vendor options. Also note that some server system solutions have two or more nodes (e.g. two or more real servers) in a single package, not to be confused with two or more sockets per node (system or motherboard). Refer to the where to learn more section below for links to Intel benchmarks and other resources.

What About Speeds and Feeds

Watch for and check out the various Intel partners who have or will be announcing their new server compute platforms based on Intel Xeon Scalable Processors. Each of the different vendors wil[...]
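One of the Quick Assist offload candidates mentioned above, lossless data compression, is easy to illustrate in software only. This hedged sketch uses Python's standard zlib library (not Quick Assist itself) just to show the round-trip property and the data footprint reduction such offload accelerates; the sample data and ratio are illustrative, not a benchmark.

```python
import zlib

# Highly repetitive sample data compresses well; real ratios vary by workload.
data = b"server storage I/O " * 100
packed = zlib.compress(data, level=6)

# Lossless means the round trip is bit-for-bit exact.
assert zlib.decompress(packed) == data
ratio = len(data) / len(packed)
print(f"{len(data)} bytes -> {len(packed)} bytes (ratio {ratio:.1f}:1)")
```

Hardware offload such as Quick Assist performs this same class of work off the CPU cores, freeing cycles for applications; the compressed output and round-trip guarantee are what matter, not where the computation runs.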

GDPR (General Data Protection Regulation) Resources Are You Ready?

Tue, 20 June 2017 17:11:17GMT

The new European General Data Protection Regulation (GDPR) goes into effect in a year, on May 25, 2018; are you ready?

What Is GDPR

If your initial response is that you are not in Europe and do not need to be concerned about GDPR, you might want to step back and review that thought. While it is possible that some organizations may not be affected by GDPR in Europe directly, there might be indirect considerations. For example, GDPR, while focused on Europe, has ties to other initiatives in place or being planned for elsewhere in the world. Likewise, unlike earlier regulatory compliance that tended to focus on specific industries such as healthcare (HIPAA and HITECH) or financial (SARBOX, Dodd/Frank among others), these new regulations can be more far-reaching.

Where To Learn More

GDPR goes into effect May 25 2018 Are You Ready?
GDPR Compliance Planning for Microsoft Environments (Webinar)
Quest GDPR Resources
Quest GDPR Compliance resources
Microsoft and Azure Cloud GDPR Resources
How Microsoft Azure Can Help Organizations Become Compliant with the EU GDPR (Microsoft PDF White Paper)
GDPR Questions? Azure has answers (Azure Microsoft)
Microsoft and GDPR Trustcenter Resources (Microsoft)
Get GDPR compliant with the Microsoft Cloud (Blogs Microsoft)
Earning your trust with contractual commitments to the General Data Protection Regulation (Blogs Microsoft)
Microsoft enterprise products and services and the GDPR (Microsoft)
Where to Start with GDPR and Microsoft (Microsoft)
Microsoft related GDPR FAQ (Microsoft)

Do you have or know of relevant GDPR information and resources? Feel free to add them via comments or send us an email; however, please watch the spam and sales pitches as they will be moderated.

What This All Means

Now is the time to start planning and preparing for GDPR if you have not done so and need to, as well as becoming more generally aware of it and other initiatives. 
One of the key takeaways is that while the word compliance is involved, there is much more to GDPR than just compliance as we have seen in the past. With GDPR and other initiatives, data protection becomes the focus, including privacy along with protect, preserve, secure and serve, as well as manage, insight and awareness along with associated reporting. Ok, nuff said (for now...). Cheers Gs

Microsoft Windows Server, Azure, Nano Life cycle Updates

Thu, 15 June 2017 18:17:16GMT

Microsoft Windows Server, Azure, Nano and Life cycle Updates

For those of you who have an interest in Microsoft Windows Server on-premise, on Azure, on Hyper-V or Nano life cycles, here are some recently announced updates. Microsoft has announced updates to Windows Server Core and Nano along with semi-annual channel updates (read more here). The synopsis of this new update via Microsoft (read more here) is: In this new model, Windows Server releases are identified by the year and month of release: for example, in 2017, a release in the 9th month (September) would be identified as version 1709. Windows Server will release semi-annually in fall and spring. Another release in March 2018 would be version 1803. The support lifecycle for each release is 18 months. Microsoft has announced that its lightweight variant of Windows Server 2016 (if you need a refresh on server requirements visit here), known as Nano, will now be focused on Windows based containers as opposed to bare metal. As part of this change, Microsoft has reiterated that Server Core, the headless (aka non-desktop user interface) version of Windows Server 2016, will continue as the platform for bare metal along with other deployments where a GUI interface is not needed. Note that one of the original premises of Nano was that it could be leveraged as a replacement for Server Core. As part of this shift, Microsoft has also stated their intention to further streamline the already slimmed down version of Windows Server known as Nano by reducing its size another 50%. Keep in mind that Nano is already a fraction of the footprint size of regular Windows Server (Core or Desktop UI). The footprint of Nano includes its capacity size on disk (HDD or SSD), as well as its memory requirements and speed of startup boot, along with the number of components, which cuts the number of updates. By focusing Nano on container use (e.g. Windows containers), Microsoft is providing multiple micro services engines (e.g. 
Linux and Windows) along with various management options including Docker. Similar to providing multiple container engines (e.g. Linux and Windows), Microsoft is also supporting management from Windows along with Unix.

Does This Confirm Rumor FUD that Nano is Dead?

IMHO the FUD rumors circulating that Nano is dead are false. Granted, Nano is being refocused by Microsoft for containers and will not be the lightweight headless Windows Server 2016 replacement for Server Core. Instead, the Microsoft focus is two-path, with continued enhancements on Server Core for headless full Windows Server 2016 deployment, while Nano gets further streamlined for containers. This means that Nano is no longer bare metal or Hyper-V focused, with Microsoft indicating that Server Core should be used for those types of deployments. What is clear (besides no bare metal) is that Microsoft is working to slim down Nano even further by removing bare metal items, PowerShell, .NET and other items instead of making those into optional items. The goal of Microsoft is to make the base Nano image on disk (or via pull) as small as possible, with the initial goal of being 50% of its current uncom[...]
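The semi-annual channel version naming described above (two-digit year then two-digit month, with an 18-month support lifecycle) is simple date arithmetic. Here is a hedged Python sketch; the helper names are my own, not Microsoft's, and the lifecycle end is approximated to the first of the month.

```python
from datetime import date

def channel_version(year: int, month: int) -> str:
    """Semi-annual channel release id: two-digit year then two-digit month."""
    return f"{year % 100:02d}{month:02d}"

def support_ends(year: int, month: int, lifecycle_months: int = 18) -> date:
    """Approximate end of the 18-month support lifecycle (month arithmetic only)."""
    total = year * 12 + (month - 1) + lifecycle_months
    return date(total // 12, total % 12 + 1, 1)

print(channel_version(2017, 9))   # 1709
print(channel_version(2018, 3))   # 1803
print(support_ends(2017, 9))      # 2019-03-01
```

This matches the examples Microsoft gives (September 2017 is 1709, March 2018 is 1803); consult Microsoft's lifecycle documentation for exact end-of-support dates.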

AWS S3 Storage Gateway Revisited (Part I)

Wed, 14 June 2017 23:13:03GMT

AWS S3 Storage Gateway Revisited (how to avoid an install error)

This Amazon Web Services (AWS) Storage Gateway Revisited post is a follow-up to the AWS Storage Gateway test drive and review I did a few years ago (thus why it's called revisited). As part of a two-part series, the first post looks at what AWS Storage Gateway is and how it has improved since my last review of AWS Storage Gateway, along with deployment options. The second post in the series looks at a sample test drive deployment and use. If you need an AWS primer and overview of various services such as Elastic Cloud Compute (EC2), Elastic Block Storage (EBS), Elastic File Service (EFS), Simple Storage Service (S3), Availability Zones (AZ), Regions and other items, check this multi-part series (Cloud conversations: AWS EBS, Glacier and S3 overview (Part I)). As a quick refresher, S3 is the AWS bulk, high-capacity unstructured and object storage service, along with its companion deep cold (e.g. inactive) Glacier. There are various S3 storage service classes including standard, reduced redundancy storage (RRS) along with infrequent access (IA) that have different availability, durability, performance, service level and cost attributes. Note that S3 IA is not Glacier, as your S3 IA data always remains on-line accessible while Glacier data can be off-line. AWS S3 can be accessed via its API, as well as via HTTP REST calls and AWS tools, along with those from third parties. Third party tools include NAS file access such as S3FS for Linux, which I use on my Ubuntu systems to mount S3 buckets and use similar to other mount points. Other tools include Cloudberry, S3 Motion and S3 Browser, as well as plug-ins available in most data protection (backup, snapshot, archive) software tools and storage systems today.

AWS S3 Storage Gateway and What's New

The Storage Gateway is the AWS tool that you can use for accessing S3 buckets and objects via your block volume, NAS file or tape based applications. 
The Storage Gateway is intended to give S3 bucket and object access to on-premise applications and data infrastructure functions including data protection (backup/restore, business continuance (BC), business resiliency (BR), disaster recovery (DR) and archiving), along with storage tiering to cloud. Some of the things that have evolved with the S3 Storage Gateway include:
  • Easier, streamlined download, installation, deployment
  • Enhanced Virtual Tape Library (VTL) and Virtual Tape support
  • File serving and sharing (not to be confused with Elastic File Services (EFS))
  • Ability to define your own bucket and associated parameters
  • Bucket options including Infrequent Access (IA) or standard
  • Options for AWS EC2 hosted, or on-premise VMware as well as Hyper-V gateways (file only supports VMware and EC2)

AWS Storage Gateway Three Functions

AWS Storage Gateway can be deployed for three basic functions:

AWS Storage Gateway File Architecture via File Gateway (NFS NAS) - Files, folders, objects and other items are stored in AWS S3 with a local cache for low latency access to most recently used data. With this option, you can create folders and subdirector[...]

Part II Revisiting AWS S3 Storage Gateway (Test Drive Deployment)

Wed, 14 June 2017 23:13:03GMT

Part II Revisiting AWS S3 Storage Gateway (Test Drive Deployment)

This Amazon Web Services (AWS) Storage Gateway Revisited post is a follow-up to the AWS Storage Gateway test drive and review I did a few years ago (thus why it's called revisited). As part of a two-part series, the first post looks at what AWS Storage Gateway is and how it has improved since my last review, along with deployment options. The second post in the series looks at a sample test drive deployment and use.

What About Storage Gateway Costs?

Costs vary by region, type of storage being used (files stored in S3, volume storage, EBS snapshots, virtual tape storage, virtual tape archive storage), as well as type of gateway host, along with how it is accessed and used. Request pricing varies, including data written to AWS storage by the gateway (up to a maximum of $125.00 per month), snapshot/volume delete, virtual tape delete (prorated fee for deletes within 90 days of being archived), virtual tape archival and virtual tape retrieval. Note that there are also various data transfer fees that vary by region and gateway host. Learn more about pricing here.

What Are Some Storage Gateway Alternatives?

AWS and S3 storage gateway access alternatives include those from various third parties (including some in the AWS Marketplace), as well as via data protection tools (e.g. backup/restore, archive, snapshot, replication) and, more commonly, storage systems. Some tools include Cloudberry, S3FS, S3 Motion and S3 Browser among many others. Tip: when a vendor says they support S3, ask them if that is for their back-end (e.g. they can access and store data in S3) or front-end (e.g. they can be accessed by applications that speak the S3 API). Also explore what format the application, tool or storage system stores data in AWS storage; for example, are files mapped one to one to S3 objects along with a corresponding directory hierarchy, or are they stored in a save set or other entity?
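To make the request-pricing cap mentioned above concrete, here is a small sketch of how the data-written fee behaves. Only the $125.00 monthly cap comes from the pricing notes above; the per-GB rate is an assumed illustrative value, so check current AWS pricing for real numbers.

```python
# Illustrative only: the per-GB rate below is an assumption for the
# example; the $125.00 monthly cap is the one noted above.
DATA_WRITTEN_RATE_PER_GB = 0.01   # assumed $/GB written through the gateway
MONTHLY_CAP = 125.00              # cap on the data-written fee per month

def data_written_fee(gb_written):
    """Fee for data written to AWS storage by the gateway, capped per month."""
    return min(gb_written * DATA_WRITTEN_RATE_PER_GB, MONTHLY_CAP)

print(data_written_fee(5120))    # 5 TB written: below the cap (about $51.20)
print(data_written_fee(20480))   # 20 TB written: fee hits the $125 cap
```

The same capped-fee pattern applies per month; storage, snapshot and data transfer charges are separate and uncapped.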
AWS Storage Gateway Deployment and Management Tips

Once you have created your AWS account (if you did not already have one) and logged into the AWS console (note the link defaults to the US East 1 region), go to the AWS Services Dashboard and select Storage Gateway (or click here, which goes to US East 1). You will be presented with three gateway modes (File, Volume or VTL).

What Does a Storage Gateway Install Look Like?

The following is what installing an AWS Storage Gateway for file and then volume looks like. First, access the AWS Storage Gateway main landing page (it might change by the time you read this) to get started. Scroll down and click on the Get Started with AWS Storage Gateway button or click here. Select the type of gateway to create; in the following example, File is chosen. Next select the type of file gateway host (EC2 cloud hosted, or on-premises VMware). If you choose VMware, an OVA will be downloaded (follow the onscreen instructions) that you deploy on your ESXi system or with vCenter. Note that there is a different VMware VM gateway OVA for File Gateway and another for Volume Gateway. In the following example, the VMware ESXi OVA is selected and down[...]
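For readers who prefer scripting the activation step over the console flow described above, the parameters involved look roughly like the following. The names match the boto3 storagegateway activate_gateway call as I understand it, but the key, gateway name and timezone are placeholders, and the actual call (commented out) needs a deployed appliance plus AWS credentials.

```python
def build_activation_params(activation_key, name,
                            region="us-east-1", gateway_type="FILE_S3"):
    """Assemble keyword arguments for a Storage Gateway activation call."""
    return {
        "ActivationKey": activation_key,  # shown by the appliance after the OVA boots
        "GatewayName": name,
        "GatewayTimezone": "GMT-6:00",    # placeholder timezone (US Central)
        "GatewayRegion": region,          # console defaults to US East 1
        "GatewayType": gateway_type,      # FILE_S3, CACHED, STORED or VTL
    }

# Placeholder activation key and name, for illustration only.
params = build_activation_params("ABCD-1234-EFGH-5678", "lab-file-gateway")
print(sorted(params))

# To actually activate (requires boto3, credentials and a live gateway):
# import boto3
# boto3.client("storagegateway").activate_gateway(**params)
```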

Top vBlog 2017 Voting Now Open

Wed, 7 June 2017 19:18:17GMT

Top vBlog 2017 Voting Now Open

It is that time of the year again when Eric Siebert (@ericsiebert) over at vSphere-land holds his annual Top vBlog (e.g. VMware and virtualization related) voting (vote here until June 30, 2017). The annual Top vBlog event enables fans to vote for their favorite blogs (to get them into the top 10, 25, 50 and 100) as well as rank them for different categories which appear on Eric's vLaunchPad site. This year's Top vBlog voting is sponsored by Turbonomic (formerly known as VMTurbo) who, if you are not aware of them, have some interesting technology for cross-platform (cloud, container, virtualization, hardware, software, services) data infrastructure management software tools.

The blogs and sites listed on Eric's site share a common virtualization theme and in particular tend to be more VMware focused; however, some are also hybrid and agnostic, spanning other technologies, vendors, services and tools. Some examples of the different focus areas include hypervisors, VDI, cloud, containers, management tools, scripting, networking, servers, storage, and data protection (including backup/restore, replication, BC and DR among others).

In addition to the main list of blogs (that are active), there are also sub lists for different categories including:

Top 100 (also top 10, 25, 50) vBlogs
Archive of retired (e.g. not active or seldom post) blogs
News and information sites
Podcasts
Scripting blogs
Storage related blogs
Various virtualization blogs
VMware corporate blogs

What To Do

Get out and vote for your favorites (or the blogs that you frequent) in appreciation of those who create virtualization, VMware and data infrastructure related content. Click here or on the image above to reach the voting survey site where you will find more information and rules. In summary, select 12 of your favorite or preferred blogs, then rank them from 1 (most favorite) to 12.
Then select your favorites for other categories such as Female Blog, Independent, New Blog, News Websites, Podcast, Scripting and Storage among others. Note: You will find my StorageIOblog in the main category (e.g. where you select 12 and then rank), as well as in the Storage, Independent and Podcast categories; thank you in advance for your continued support.

Which Blogs Do I Recommend (Among Others)

Two of my favorite blogs (and authors) are not included, as Duncan Epping (Yellow Bricks), former #1, and Frank Denneman, former #4, chose not to take part this year, opening the door for some others to move up into the top 10 (or 25, 50 and 100). Of those listed, some of the blogs I find valuable include Cormac Hogan of VMware, Demitasse (Alastair Cooke), ESX Virtualization (Vladan Seget), Kendrick Coleman, (Eric Sloof), Planet VM (Tom Howarth), Virtually Ghetto (William Lam), VM Blog (David Marshall), (Eric Siebert) and Wahl Networks (Chris Wahl) among others.

Where to learn more

vSphere-land
Turbonomic
2017 Top vBlog Voting

What this all means

It's that time of the year again to take a few moments and show some appreciation for you[...]