Subscribe: Storageioblog summary RSS feed (http://storageioblog.com/RSS.xml)

StorageIOblog RSS feed (https://storageioblog.com/RSSfull.xml)



Greg's Server and StorageIO blog: IT information and data infrastructure, cloud, virtualization, software defined, storage and I/O



Published: Fri, 6 Apr 2018 17:16:15 -0500

Last Build Date: Fri, 6 Apr 2018 17:16:15 -0500

Copyright: (C) Copyright 2006-2018 All rights reserved. Server StorageIO and UnlimitedIO LLC (www.storageio.com)
 




Fri, 6 Apr 2018 17:16:15 -0500

Have you heard about the new CLOUD Act data regulation? Have you heard about the new CLOUD Act data regulation? The new CLOUD Act data regulation became law as part of the recent $1.3 Trillion (USD) omnibus U.S. government budget spending bill passed by Congress on March 23, 2018 and signed by President of the U.S. (POTUS) Donald Trump. CLOUD Act is the acronym for Clarifying Lawful Overseas Use of Data, not to be confused with initiatives such as the U.S. federal government's Cloud First among others, which are focused on using cloud, securing and complying (e.g. FedRAMP among others). In other words, the new CLOUD Act data regulation pertains to how data stored by cloud or other service providers can be accessed by law enforcement officials (LEO). Supreme Court of the U.S. (SCOTUS) Image via https://www.supremecourt.gov/ CLOUD Act background and Stored Communications Act After the signing into law of the CLOUD Act, the US Department of Justice (DOJ) has asked the Supreme Court of the U.S. (SCOTUS) to dismiss the pending case against Microsoft (e.g., Azure Cloud). The case or question in front of SCOTUS pertained to whether LEO can search as well as seize information or data that is stored overseas or in foreign countries. As a refresher, or if you had not heard, SCOTUS was asked to resolve whether a service provider who is responding to a warrant based on probable cause under the 1986 era Stored Communications Act is required to provide data in its custody, control or possession, regardless of whether it is stored inside or outside the US. Microsoft Azure Regions via Microsoft.com This particular case in front of SCOTUS centered on whether Microsoft (a U.S. technology firm) had to comply with a court order to produce emails (as part of an LEO drug investigation) even if those were stored outside of the US. In this particular situation, the emails were alleged to have been stored in a Microsoft Azure Cloud Dublin, Ireland data center. 
For its part, Microsoft senior attorney Hasan Ali said via FCW, “This bill is a significant step forward in the larger global debate on what our privacy laws should look like, even if it does not go to the highest threshold.” Here are some additional perspectives via Microsoft's Brad Smith on his blog along with a video. What is CLOUD Act Clarifying Lawful Overseas Use of Data is the new CLOUD Act data regulation approved by Congress (House and Senate); details can be read here and here respectively, with additional perspectives here. The new CLOUD Act law allows POTUS to enter into executive agreements with foreign governments about data on criminal suspects. Granted, what is or is not a crime in a given country will likely open a Pandora’s box of issues. For example, in the case of Microsoft, if an agreement between the U.S. and Ireland were in place, and Ireland agreed to release the data, it could then be accessed. Now, for some who might be hyperventilating after reading the last sentence, keep in mind that if you are overseas, it is up to your government to protect your privacy. The foreign government must have an agreement in place with the U.S., and a crime must have been committed, a crime that both parties concur on. Also, keep in mind that there are appeal processes for providers, including when the customer is not a U.S. person and does not reside in the U.S., and the disclosure would put the provider at risk of violating foreign law. Also, keep in mind that various provisions must be met before a cloud or service provider has to hand over your data regardless of what country you reside in, or where the data resides. Where to learn more Learn more about CLOUD Act, cloud, data protection, world backup day, recovery, restoration, GDPR along with related data infrastructure topics for cloud, legacy and other software defined environments via the following links: AWS Cloud Application Data Protection Webinar U.S. 
House and Senate versions of CLOUD Act data regulations CLOUD (Clarifying Lawful Overseas Use of Data) Act data regulati[...]



Data Protection Recovery Life Post World Backup Day Pre GDPR

Mon, 2 Apr 2018 21:20:19 -0500

Data Protection Recovery Life Post World Backup Day Pre GDPR It's time for Data Protection Recovery Life Post World Backup Day Pre GDPR Start Date. The annual March 31 World Backup Day focus has come and gone once again. However, that does not mean data protection, including backup as well as recovery along with security, gets a 364-day vacation until March 31, 2019 (or the days leading up to it). Granted, for some environments, public relations, editors, influencers and other industry folks, backup day will take some time off while others jump on the ramp up to GDPR, which goes into effect May 25, 2018. Expanding Focus Data Protection and GDPR As I mentioned in this post here, world backup day should be expanded to include increased focus not just on backup, but also recovery as well as other forms of data protection. Likewise, May 25, 2018 is not the deadline, finish line, or destination for GDPR (General Data Protection Regulation), rather, it is the starting point for an evolving journey, one that has global impact as well as applicability. Recently I participated in a fireside chat discussion with Danny Allan of Veeam, who shared his GDPR expertise as well as experiences, lessons learned, and tips from Veeam as they started their journey; check it out here. Expanding Focus Data Protection Recovery and other Things that start with R As part of expanding the focus on Data Protection Recovery Life Post World Backup Day Pre GDPR, that also means looking at, and discussing, things that start with R (like Recovery). Some examples besides recovery include restoration, reassess, review, rethink protection, recovery point, RPO, RTO, reconstruction, resiliency, ransomware, RAID, repair, remediation, restart, resume, rollback, and regulations among others. Data Protection Tips, Reminders and Recommendations There are no blue participation ribbons for failed recovery. However, there can be pink slips. Only you can prevent on-premise or cloud data loss. 
However, it is also a shared responsibility with vendors and service providers. You can’t go forward in the future when there is a disaster or loss of data if you can’t go back in time for recovery. GDPR applies to organizations around the world of all sizes and across all sectors including nonprofits. Keep new school 4 3 2 1 data protection in mind while evolving from old school 3 2 1 backup rules. A fundamental premise of data infrastructures is to enable applications and their data: protect, preserve, secure and serve. Remember to protect your applications as well as data, including metadata and settings configurations. Test your restores, including whether you can use the data along with security settings. Don’t cause a disaster in the course of testing your data protection, backups or recovery. Expand (or refresh) your data protection and data infrastructure education tradecraft skills and experiences. Where to learn more Learn more about data protection, world backup day, recovery, restoration, GDPR along with related data infrastructure topics for cloud, legacy and other software defined environments via the following links: AWS Cloud Application Data Protection Webinar March 2018 Server StorageIO Data Infrastructure Update Newsletter Application Data Value Characteristics Everything Is Not The Same (five-part mini-series) Application Data Availability 4 3 2 1 Data Protection (part of the mini-series) World Backup Day: Best Practices for a Hybrid Approach Data Protection Diaries Fundamental Topics Tools Techniques Technologies Tips World Backup Day 2018 Data Protection Readiness Reminder Data Protection Fundamental Topics Tools Techniques Technologies Tips Data Protection Diaries (Archive, Backup/Restore, BC, BR, DR, HA, Replication, Security) Veeam GDPR preparedness experiences Webinar walking the talk Data Infrastructure Server Storage I/O related Tradecraft Overview Data Infrastructure Overview, Its What’s Inside of Data Centers 4 3 2 1 and 3 2 1 data protection best practi[...]



AWS Cloud Application Data Protection Webinar

Thu, 29 Mar 2018 11:10:09 -0500

AWS Cloud Application Data Protection Webinar AWS Cloud Application Data Protection Webinar Date: Tuesday, April 24, 2018 at 11:00am PT / 2:00pm ET Only YOU can prevent data loss for on-premise, Amazon Web Services (AWS) based cloud, and hybrid applications. Join me in this free AWS Cloud Application Data Protection Webinar (registration required) sponsored by Veeam and produced by Redmond Magazine as we explore issues, trends, tools, best practices and techniques for enabling data protection with AWS technologies. Attend and learn about: Application-aware point in time snapshot data protection Protecting AWS EC2 and on-premise applications (and data) Leveraging AWS for data protection and recovery And much more. Register for the live event or catch the replay here. Where to learn more Learn more about data protection, software defined data center (SDDC), software defined data infrastructures (SDDI), AWS, cloud and related topics via the following links: World Backup Day 2018 Data Protection Readiness Reminder Data Infrastructure Server Storage I/O related Tradecraft Overview Data Infrastructure Overview, Its What's Inside of Data Centers 4 3 2 1 and 3 2 1 data protection best practices Only you can prevent cloud data loss Cloud conversations: confidence, certainty and confidentiality Cloud conversations: AWS EBS, Glacier and S3 overview (Part I) Cloud Conversations AWS Azure Service Maps via Microsoft AWS S3 Storage Gateway Revisited (Part I) Cloud Conversations: AWS S3 Cross Region Replication storage enhancements Cloud conversations: AWS EBS, Glacier and S3 overview (Part II S3) AWS Announces New S3 Cloud Storage Security Encryption Features All You Need To Know about ROBO Data Protection Backup (free webinar with registration) Software Defined, Converged Infrastructure (CI), Hyper-Converged Infrastructure (HCI) resources The SSD Place (SSD, NVM, PM, SCM, Flash, NVMe, 3D XPoint, MRAM and related topics) The NVMe Place (NVMe related topics, trends, tools, technologies, 
tips and resources) Data Protection Diaries (Archive, Backup/Restore, BC, BR, DR, HA, RAID/EC/LRC, Replication, Security) Various Data Infrastructure related events, webinars and other activities Server StorageIO.tv (various videos and podcasts, fun and for work) Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book. What this all means and wrap-up You cannot go forward if you cannot go back to a particular point in time (e.g. recovery point objective or RPO). Likewise, if you cannot go back to a given RPO, how can you go forward with your business as well as meet your recovery time objective (RTO)? Join us for the live conversation or replay by registering (free) here to learn how to enable AWS cloud application data protection, as well as using AWS S3 for on-site, on-premise data protection. Ok, nuff said, for now. GS Greg Schulz - Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden. All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2018 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO. [...]
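The RPO point above lends itself to a simple check. The following is an illustrative Python sketch (the function and data names are my own, not from any backup product) that tests whether the most recent recovery point still falls within a stated RPO, i.e., whether a restore now would lose more data than your objective allows:

```python
from datetime import datetime, timedelta

def within_rpo(recovery_points, now, rpo):
    """True if the newest recovery point is no older than the RPO,
    meaning a restore would lose at most `rpo` worth of changes."""
    return bool(recovery_points) and (now - max(recovery_points)) <= rpo

# Two recovery points from earlier in the day (illustrative values)
points = [datetime(2018, 4, 24, 2, 0), datetime(2018, 4, 24, 14, 0)]
now = datetime(2018, 4, 24, 18, 0)

print(within_rpo(points, now, timedelta(hours=6)))  # newest copy is 4h old
print(within_rpo(points, now, timedelta(hours=2)))  # 4h old misses a 2h RPO
```

The same comparison, applied to how long a restore takes rather than how old the copy is, is the RTO side of the question.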



Microsoft Windows Server 2019 Insiders Preview

Wed, 28 Mar 2018 23:22:21 -0600

Microsoft Windows Server 2019 Insiders Preview Microsoft Windows Server 2019 Insiders Preview has been announced. Windows Server 2019 in the past might have been named 2016 R2; it is also known as a Long-Term Servicing Channel (LTSC) release. Microsoft recommends LTSC Windows Server for workloads such as Microsoft SQL Server, SharePoint and SDDC. The focus of Microsoft Windows Server 2019 Insiders Preview is around hybrid cloud, security, application development as well as deployment including containers, software defined data center (SDDC) and software defined data infrastructure, as well as converged along with hyper-converged infrastructure (HCI) management. Windows Server 2019 Preview Features Features and enhancements in the Microsoft Windows Server 2019 Insiders Preview span HCI management, security, and hybrid cloud among others. Hybrid cloud - Extending Active Directory, file server synchronization, cloud backup, applications spanning on-premise and cloud, and management. Security - Protect, detect and respond including shielded VMs, a guarded fabric of attested guarded hosts, Windows and Linux VMs (shielded), VMConnect for Windows and Linux troubleshooting of shielded VMs and encrypted networks, and Windows Defender Advanced Threat Protection (ATP) among other enhancements. Application platform - Developer and deployment tools for Windows Server containers and Windows Subsystem for Linux (WSL). Note that Microsoft has also been reducing the size of the Server image while extending feature functionality. The smaller images take up less storage space, plus load faster. As part of continued serverless and container support (Windows and Linux along with Docker), there are options for deployment orchestration including Kubernetes (in beta). Other enhancements include extending previous support for Windows Subsystem for Linux (WSL). Other enhancements part of Microsoft Windows Server 2019 Insiders Preview include cluster sets in support of software defined data center (SDDC). 
Cluster sets expand SDDC clusters as a loosely coupled grouping of multiple failover clusters including compute, storage as well as hyper-converged configurations. Virtual machines have fluidity across member clusters within a cluster set, along with a unified storage namespace. The existing failover cluster management experience is preserved for member clusters, along with a new cluster set instance of the aggregate resources. Management enhancements include S2D software defined storage performance history, Project Honolulu support for storage updates, along with PowerShell cmdlet updates, as well as System Center 2019. Learn more about Project Honolulu hybrid management here and here. Microsoft and Windows LTSC and SAC As a refresher, Microsoft Windows (along with other software) is now being released on two paths: the more frequent Semi-Annual Channel (SAC), and less frequent LTSC releases. Some other things to keep in mind are that SAC releases are focused around server core and nano server as a container image, while LTSC includes server with desktop experience as well as server core. For example, Windows Server 2016, released fall of 2016, is an LTSC, while the 1709 release was a SAC which had specific enhancements for container related environments. There was some confusion fall of 2017 when 1709 was released, as it was optimized for container and serverless environments and thus lacked Storage Spaces Direct (S2D), leading some to speculate S2D was dead. S2D, among other items that were not in the 1709 SAC, is very much alive and enhanced in the LTSC preview for Windows Server 2019. Learn more about Microsoft LTSC and SAC here. Test Driving Installing The Bits One of the enhancements with the LTSC preview candidate Server 2019 is improved upgrades of existing environments. Granted, not everybody will choose the in-place upgrade keeping existing files; however, some may find the capability useful. I chose the upgrade keeping current files in place as an option to see how it [...]



March 2018 Server StorageIO Data Infrastructure Update Newsletter

Tue, 20 Mar 2018 13:14:15 -0600

March 2018 Server StorageIO Data Infrastructure Update Newsletter Volume 18, Issue 3 (March 2018) Hello and welcome to the March 2018 Server StorageIO Data Infrastructure Update Newsletter. If you are wondering where the January and February 2018 update newsletters are, they are rolled into this combined edition. In addition to the short email version (free signup here), you can access full versions (html here and PDF here) along with previous editions here. In this issue: Data Infrastructure Industry Activity News Commentary and Tips Server StorageIOblog posts Recommended Reading Various Events and Webinars Industry Resources and Links Enjoy this edition of the Server StorageIO Data Infrastructure update newsletter. Cheers GS Data Infrastructure and IT Industry Activity Trends World Backup Day is coming up on March 31, which is a good time to remember to verify and validate that your data protection is working as intended. On one hand, I think it is a good idea to call out the importance of making sure your data is protected, including backed up. On the other hand, data protection is not a once-a-year, rather a year-round, 7 x 24 x 365 focus. Also, the focus needs to be on more than just backup, rather, all aspects of data protection from archiving to business continuance (BC), business resiliency (BR), disaster recovery (DR), always on, always accessible, along with security and recovery. Data Infrastructure 4 3 2 1 Data Protection and Backup Some data spring thoughts, perspectives and reminders. Data lakes may swell beyond their banks causing rivers of data to flood as they flow into larger reservoirs, great data lakes, gulfs of data, seas and oceans of data. Granted, some of that data will be inactive and cold, parked like glaciers, while other data is semi-active, floating around like icebergs. Hopefully your data is stored on durable storage solutions or services and does not melt. 
Application Data Availability 4 3 2 1 Data Protection Veeam GDPR preparedness experiences Webinar walking the talk Benefits of Moving Hyper-V Disaster Recovery to the Cloud Webinar World Backup Day 2018 Data Protection Readiness Reminder Part 1 – Data Infrastructure Data Protection Fundamentals Data Protection Diaries series Various NAND Flash SSD devices and SAS, SATA, NVMe, M.2 interfaces Non-Volatile Memory (NVM) including various solid state device (SSD) mediums (e.g. NAND flash, 3D XPoint, MRAM among others), packaging (drives, PCIe add-in cards [AIC], along with entire systems, appliances or arrays). Also part of the continued evolution of NVM, SSD and other persistent memories (PM) including storage class memories (SCM) are different access protocol interfaces. Keep in mind that there is a difference between NVM (medium) and NVMe (access). NVM is the generic category of mediums or media and devices such as NAND flash, NVRAM, 3D XPoint among other SCMs (and PMs). In other words, NVM is what devices use for storing data, while NVMe is how devices and systems are accessed. NVMe and its variations are how NVM, SSD, PM, SCM media and devices get accessed locally, as well as over network fabrics (e.g. NVMe-oF and FC-NVMe). NVMe continues to evolve including with networked fabric variations such as RDMA based NVMe over Fabric (NVMe-oF), along with Fibre Channel based (FC-NVMe). The Fibre Channel Industry Association trade group recently held its second multi-vendor plugfest in support of NVMe over Fibre Channel. Read more about NVM, NVMe, SSD, SCM, flash and related technologies, tools, trends, tips via the following resources: VMware continues cloud construction with March announcements Use Intel Optane NVMe U.2 SFF 8639 SSD drive in PCIe slot PCIe Server Storage I/O Network Fundamentals #blogtober If Answer is NVMe, what are the questions? NVMe Wont Replace Flash By Itself They Complement Each Other www.thessdplace.com and www.thenvmeplace.com [...]
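The NVM (medium) vs. NVMe (access) distinction above can be made concrete with a small illustration. The device names and inventory structure below are invented for the example, not from any tool; the point is simply that the same medium (e.g. NAND flash) can sit behind different access protocols:

```python
# Illustrative only: the medium (NVM) a device is built from is
# independent of the protocol (SATA, SAS, NVMe, NVMe-oF) used to access it.
devices = [
    {"name": "drive-a",   "medium": "nand-flash", "access": "SATA"},
    {"name": "drive-b",   "medium": "nand-flash", "access": "NVMe"},
    {"name": "drive-c",   "medium": "3D XPoint",  "access": "NVMe"},
    {"name": "fabric-ns", "medium": "nand-flash", "access": "NVMe-oF"},
]

def nvme_accessed(devs):
    """Return devices reached via NVMe or one of its fabric variants.
    Note this filters on access protocol, not on the storage medium."""
    return [d["name"] for d in devs if d["access"].startswith("NVMe")]

print(nvme_accessed(devices))  # ['drive-b', 'drive-c', 'fabric-ns']
```

Here drive-a and drive-b share the same NVM medium, yet only drive-b is NVMe-accessed, which is exactly the medium-vs-access point.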



Application Data Value Characteristics Everything Is Not The Same (Part I)

Tue, 13 Mar 2018 17:16:15 -0600

Application Data Value Characteristics Everything Is Not The Same Application Data Value Characteristics Everything Is Not The Same This is part one of a five-part mini-series looking at Application Data Value Characteristics Everything Is Not The Same as a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we start things off by looking at general application server storage I/O characteristics that have an impact on data value as well as access. Everything is not the same across different organizations including Information Technology (IT) data centers, data infrastructures along with the applications as well as data they support. For example, there is so-called big data that can be many small files, objects, blobs or data and bit streams representing telemetry, click stream analytics, logs among other information. Keep in mind that applications impact how data is accessed, used, processed, moved and stored. What this means is that a focus on data value, access patterns, along with other related topics needs to also consider application performance, availability, capacity, economic (PACE) attributes. If everything is not the same, why is so much data along with many applications treated the same from a PACE perspective? Data Infrastructure resources including servers, storage and networks might be cheap or inexpensive; however, there is a cost to managing them along with data. Managing includes data protection (backup, restore, BC, DR, HA, security) along with other activities. Likewise, there is a cost to the software along with cloud services among others. By understanding how applications use and interact with data, smarter, more informed data management decisions can be made. 
IT Applications and Data Infrastructure Layers Keep in mind that everything is not the same across various organizations, data centers, data infrastructures, data and the applications that use them. Also keep in mind that programs (e.g. applications) = algorithms (code) + data structures (how data is defined and organized, structured or unstructured). There are traditional applications, along with those tied to Internet of Things (IoT), Artificial Intelligence (AI) and Machine Learning (ML), Big Data and other analytics including real-time click stream, media and entertainment, security and surveillance, log and telemetry processing among many others. What this means is that there are many different applications with various characteristic attributes, along with resource (server compute, I/O network and memory, storage) requirements, as well as service requirements. Common Applications Characteristics Different applications will have various attributes, in general, as well as how they are used, for example, database transaction activity vs. reporting or analytics, logs and journals vs. redo logs, indices, tables, import/export, scratch and temp space. Performance, availability, capacity, and economics (PACE) describe the application and data characteristics and needs shown in the following figure. Application PACE attributes (via Software Defined Data Infrastructure Essentials) All applications have PACE attributes, however: PACE attributes vary by application and usage Some applications and their data are more active than others PACE characteristics may vary within different parts of an application Think of an application along with its associated data PACE as its personality or how it behaves, what it does, how it does it, and when, along with value, benefit, or cost as well as quality-of-service (QoS) attributes. Understanding applications in different environments, including data values and associated PACE attributes, is essential for making [...]
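One way to make the "PACE as an application's personality" idea concrete is to model each application's profile as a small record. The following sketch is purely illustrative — the profile values, tier names, and the trivial placement rule are invented for the example, not taken from the book or any product:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PACE:
    performance: str   # I/O sensitivity, e.g. "high", "medium", "low"
    availability: str  # protection/recovery class, e.g. "tier-1".."tier-3"
    capacity: str      # footprint/growth expectation
    economics: str     # cost sensitivity

# Different applications, different PACE personalities (example values):
profiles = {
    "oltp-database":   PACE("high",   "tier-1", "moderate", "premium"),
    "log-archive":     PACE("low",    "tier-3", "large",    "low-cost"),
    "click-analytics": PACE("medium", "tier-2", "large",    "moderate"),
}

def placement_hint(p: PACE) -> str:
    """Trivial illustration of PACE-informed decisions: map a profile
    to a storage class instead of treating everything the same."""
    if p.performance == "high":
        return "nvme-ssd"
    return "capacity-hdd" if p.economics == "low-cost" else "hybrid"

for name, p in profiles.items():
    print(name, "->", placement_hint(p))
```

Even a toy rule like this shows why treating all applications identically from a PACE perspective wastes either money or performance.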



Application Data Availability 4 3 2 1 Data Protection

Tue, 13 Mar 2018 17:16:15 -0600

Application Data Availability 4 3 2 1 Data Protection Application Data Availability 4 3 2 1 Data Protection This is part two of a five-part mini-series looking at Application Data Value Characteristics everything is not the same as a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we continue looking at application performance, availability, capacity, economic (PACE) attributes that have an impact on data value as well as availability. Availability (Accessibility, Durability, Consistency) Just as there are many different aspects and focus areas for performance, there are also several facets to availability. Note that application performance requires availability, and availability relies on some level of performance. Availability is a broad and encompassing area that includes data protection to protect, preserve, and serve (backup/restore, archive, BC, BR, DR, HA) data and applications. There are logical and physical aspects of availability including data protection as well as security, including key management (manage your keys, authentication and certificates) and permissions, among other things. Availability = accessibility (can you get to your application and data) + durability (is the data intact and consistent). This includes basic Reliability, Availability, Serviceability (RAS), as well as high availability, accessibility, and durability. “Durable” has multiple meanings, so context is important. Durable means how data infrastructure resources hold up to, survive, and tolerate wear and tear from use (i.e., endurance), for example, Flash SSD or mechanical devices such as Hard Disk Drives (HDDs). Another context for durable refers to data, meaning how many copies exist in various places. 
Server, storage, and I/O network availability topics include: Resiliency and self-healing to tolerate failure or disruption Hardware, software, and services configured for resiliency Accessibility to reach or be reached for handling work Durability and consistency of data to be available for access Protection of data, applications, and assets including security Additional server I/O and data infrastructure along with storage topics include: Backup/restore, replication, snapshots, sync, and copies Basic Reliability, Availability, Serviceability, HA, fail over, BC, BR, and DR Alternative paths, redundant components, and associated software Applications that are fault-tolerant, resilient, and self-healing Non-disruptive upgrades, code (application or software) loads, and activation Immediate data consistency and integrity vs. eventual consistency Virus, malware, and other data corruption or loss prevention From a data protection standpoint, the fundamental rule or guideline is 4 3 2 1, which means having at least four copies consisting of at least three versions (different points in time), at least two of which are on different systems or storage devices and at least one of those is off-site (on-line, off-line, cloud, or other). There are many variations of the 4 3 2 1 rule shown in the following figure, along with approaches on how to manage the technology used. We will go deeper into this subject in later chapters. For now, remember the following. 4 3 2 1 data protection (via Software Defined Data Infrastructure Essentials) 1    At least four copies of data (or more): enables durability in case a copy goes bad, is deleted or corrupted, or a device or site fails. 2    The number of versions (or more) of the data to retain: enables various recovery points in time to restore, resume, or restart from. 3    Data located o[...]
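The 4 3 2 1 guideline described above is mechanical enough to check automatically. The following is a minimal sketch, with an invented copy-inventory structure (no real backup tool exposes exactly this), that evaluates a set of backup copies against the four conditions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Copy:
    version: str   # point-in-time identifier (e.g. a date)
    system: str    # system or storage device holding the copy
    offsite: bool  # stored off-site (cloud, another facility)?

def meets_4321(copies):
    """4 3 2 1 check: at least four copies, at least three versions
    (points in time), on at least two different systems or devices,
    with at least one copy off-site."""
    return (len(copies) >= 4
            and len({c.version for c in copies}) >= 3
            and len({c.system for c in copies}) >= 2
            and any(c.offsite for c in copies))

copies = [
    Copy("2018-03-29", "primary-array",    False),
    Copy("2018-03-30", "primary-array",    False),
    Copy("2018-03-30", "backup-appliance", False),
    Copy("2018-03-31", "cloud-bucket",     True),
]
print(meets_4321(copies))  # True: 4 copies, 3 versions, 3 systems, 1 off-site
```

Drop the cloud copy and the check fails on both the copy count and the off-site condition, which is the point of the rule: redundancy along several independent dimensions at once.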



Application Data Characteristics Types Everything Is Not The Same

Tue, 13 Mar 2018 17:16:15 -0600

Application Data Characteristics Types Everything Is Not The Same Application Data Characteristics Types Everything Is Not The Same This is part three of a five-part mini-series looking at Application Data Value Characteristics everything is not the same as a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we continue looking at application and data characteristics with a focus on different types of data. There is more to data than simply being big data, fast data, structured, unstructured or semistructured, some of which has been touched on in this series, with more to follow. Note that there is also data in terms of the programs, applications, code, rules, policies as well as configuration settings, metadata along with other items stored. Various Types of Data Data types along with characteristics include big data, little data, fast data, and old as well as new data with different value, life-cycle, volume and velocity. There is data in files and objects, large and small, representing images, figures, text and binary content, structured or unstructured, that is software defined by the applications that create, modify and use it. There are many different types of data and applications to meet various business, organization, or functional needs. Keep in mind that applications are based on programs which consist of algorithms and data structures that define the data, how to use it, as well as how and when to store it. Those data structures define data that will get transformed into information by programs while also being stored in memory and on storage in various formats. Just as various applications have different algorithms, they also have different types of data. 
Even though everything is not the same in all environments, or even how the same applications get used across various organizations, there are some similarities. Even though there are different types of applications and data, there are also some similarities and general characteristics. Keep in mind that information is the result of programs (applications and their algorithms) that process data into something useful or of value. Data typically has a basic life cycle of: creation and some activity, including being protected; dormancy, followed by either continued activity or going inactive; and disposition (delete or remove). In general, data can be: temporary, ephemeral or transient; dynamic or changing (“hot data”); active static on-line, near-line, or off-line (“warm data”); or in-active static on-line or off-line (“cold data”). Data is organized as structured, semi-structured or unstructured. General data characteristics include: Value = from no value to unknown to some or high value; Volume = amount of data, files, objects of a given size; Variety = various types of data (small, big, fast, structured, unstructured); Velocity = data streams, flows, rates, load, process, access, active or static. The following figure shows how different data has various values over time. Data that has no value today or in the future can be deleted, while data with unknown value can be retained. Different data with various values over time Data Value Known, Unknown and No Value General characteristics include the value of the data, which in turn determines its performance, availability, capacity, and economic considerations. Also, data can be ephemeral (temporary) or kept for longer periods of time on persistent, non-volatile storage (you do not lose the data when power is turned off). Examples of temporary scratch include work and scratch areas such as where[...]
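The hot/warm/cold life-cycle distinction above is often implemented as a simple age-of-last-access policy. Here is an illustrative sketch; the 30-day and 180-day thresholds are arbitrary example values, not a standard, and real tiering policies weigh more signals than last access alone:

```python
from datetime import datetime, timedelta

def classify(last_access: datetime, now: datetime) -> str:
    """Classify data activity by how long since it was last accessed.
    Thresholds (30/180 days) are illustrative assumptions only."""
    age = now - last_access
    if age <= timedelta(days=30):
        return "hot"    # dynamic or changing
    if age <= timedelta(days=180):
        return "warm"   # active static
    return "cold"       # in-active static

now = datetime(2018, 3, 31)
print(classify(datetime(2018, 3, 15), now))  # hot
print(classify(datetime(2017, 12, 1), now))  # warm
print(classify(datetime(2016, 6, 1), now))   # cold
```

A classification like this is what feeds tiering or archiving decisions, moving warm data toward near-line storage and cold data toward off-line or cloud archive tiers.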



Application Data Volume Velocity Variety Everything Is Not The Same

Tue, 13 Mar 2018 17:16:15 -0600

Application Data Volume Velocity Variety Everything Not The Same

This is part four of a five-part mini-series looking at Application Data Value Characteristics (everything is not the same) as a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we continue looking at application and data characteristics with a focus on data volume, velocity and variety; after all, everything is not the same, not to mention the many different aspects of big data as well as little data.

Volume of Data

More data is being created at a faster rate every day, and that data is being retained for longer periods. Some data being retained has known value, while a growing amount of data has unknown value. Data is generated or created from many sources, including mobile devices, social networks, web-connected systems or machines, and sensors including IoT and IoD. Besides the places where data is created, there are also many consumers of data (applications) that range from legacy to mobile, cloud, and IoT among others. Unknown-value data may eventually have value in the future when somebody realizes that they can do something with it, or a technology tool or application becomes available to transform the data with unknown value into valuable information. Some data gets retained in its native or raw form, while other data gets processed by application program algorithms into summary data, or is curated and aggregated with other data to be transformed into new useful data. The figure below shows, from left to right and front to back, more data being created, and that data also getting larger over time. For example, on the left are two data items, objects, files, or blocks representing some information.
In the center of the following figure are more columns and rows of data, with each of those data items also becoming larger. Moving farther to the right, there are yet more data items stacked up higher, as well as across and farther back, with those items also being larger. The following figure can represent blocks of storage, files in a file system, rows and columns in a database or key-value repository, or objects in a cloud or object storage system.

Increasing data velocity and volume, more data and data getting larger

In addition to more data being created, some of that data is relatively small in terms of the records or data structure entities being stored. However, there can be a large quantity of those smaller data items. In addition to the amount of data, as well as the size of the data, protection or overhead copies of data are also kept. Another dimension is that data is also getting larger, where the data structures describing a piece of data for an application have increased in size. For example, a still photograph taken with a digital camera, cell phone, or another mobile handheld device, drone, or other IoT device increases in size with each new generation of cameras as there are more megapixels.

Variety of Data

In addition to having value and volume, there are also different varieties of data, including ephemeral (temporary), persistent, primary, metadata, structured, semi-structured, unstructured, little, and big data. Keep in mind that programs, applications, tools, and utilities get stored as data, while they also use, create, access, and manage data. There is also primary data and metadata, or data about data, as well as system data that is also sometimes referred to as metadata. Here is where context comes into play as part of tradecraft, as there can be metadata describing data being used by programs, as well as metadata about[...]
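The megapixel example above lends itself to a quick back-of-the-envelope volume calculation. A small sketch, where the 3-bytes-per-pixel depth, the camera generations and the photo and copy counts are illustrative assumptions of my own:

```python
def raw_image_size_mb(megapixels, bytes_per_pixel=3):
    # Uncompressed 24-bit RGB: 3 bytes per pixel, 1 MP = 1,000,000 pixels,
    # so size in MB is simply megapixels * bytes_per_pixel
    return megapixels * bytes_per_pixel

# Successive camera generations (illustrative megapixel counts)
for mp in (8, 12, 24, 42):
    print(f"{mp} MP -> {raw_image_size_mb(mp):.0f} MB uncompressed")

# Volume also multiplies with protection or overhead copies of the data
photos, copies = 10_000, 4
total_tb = photos * raw_image_size_mb(24) * copies / 1_000_000
print(f"{photos} x 24 MP photos with {copies} copies ~ {total_tb:.2f} TB")
```

Real photos are compressed, of course, but the point stands: item size, item count and copy count all compound into total volume.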



Application Data Access Lifecycle Patterns Everything Is Not The Same

Tue, 13 Mar 2018 17:16:15 -0600

Application Data Access Life cycle Patterns Everything Is Not The Same (Part V)

This is part five of a five-part mini-series looking at Application Data Value Characteristics (everything is not the same) as a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at Amazon.com and other global venues. In this post, we look at various application and data lifecycle patterns as well as wrap up this series.

Active (Hot), Static (Warm and WORM), or Dormant (Cold) Data and Lifecycles

When it comes to Application Data Value, a common question I hear is: why not keep all data? If the data has value, and you have a large enough budget, why not? On the other hand, most organizations have a budget and other constraints that determine how much and what data to retain. Another common question I get asked (or told) is: isn't the objective to keep less data to cut costs? If the data has no value, then get rid of it. On the other hand, if data has value or unknown value, then find ways to remove the cost of keeping more data for longer periods of time so its value can be realized. In general, the data life cycle (called by some cradle to grave, or birth/creation to disposition) is: created, saved and stored, perhaps updated and read with changing access patterns over time, along with changing value. During that time, the data (which includes applications and their settings) will be protected with copies or some other technique, and eventually disposed of. Between the time when data is created and when it is disposed of, there are many variations of what gets done and needs to be done.
Considering static data for a moment, some applications and their data, or data and their applications, create data which is active for a short period, then goes dormant, then is active again briefly before going cold (see the left side of the following figure). This is the classic application, data, and information life-cycle management (ILM) model, and the tiering or data movement and migration that goes with it still applies for some scenarios.

Changing data access patterns for different applications

However, a newer scenario over the past several years that continues to increase is shown on the right side of the above figure. In this scenario, data is initially active for updates, then goes cold or WORM (Write Once/Read Many); however, it warms back up as a static reference, on the web, as big data, and for other uses where it is used to create new data and information. Data, in addition to its other attributes already mentioned, can be active (hot), residing in a memory cache, buffers inside a server, or on a fast storage appliance or caching appliance. Hot data means that it is actively being used for reads or writes (this is what the term heat map pertains to in the context of servers, storage, data, and applications). The heat map shows where the hot or active data is along with its other characteristics. Context is important here, as there are also IT facilities heat maps, which refer to physical facilities, including what servers are consuming power and generating heat. Note that some current and emerging data center infrastructure management (DCIM) tools can correlate the physical facilities power, cooling, and heat to actual work being done from an applications perspective. This correlated or converged management view enables more granular analysis and effective decision-making on how to best utilize data infrastructure resources. In addition to being hot or active, data can be warm (not as heavily accessed) or cold (rarely if ever accessed), as well as online, near-line, or off-line.
As thei[...]
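The heat map concept described above can be sketched as a toy per-block access counter whose counts suggest a placement tier. The class name, thresholds and tier labels below are illustrative assumptions of mine, not any product's actual algorithm:

```python
from collections import Counter

class HeatMap:
    """Toy access heat map: count touches per block, suggest a tier.

    Thresholds and tier names are illustrative assumptions."""
    def __init__(self):
        self.access = Counter()

    def touch(self, block):
        # Record one read or write against a block
        self.access[block] += 1

    def tier(self, block):
        n = self.access[block]
        if n >= 100:
            return "dram-cache"     # hot: actively read/written
        if n >= 10:
            return "ssd"            # warm: still referenced
        if n >= 1:
            return "hdd"            # cool: rarely accessed
        return "cloud-archive"      # cold: dormant

hm = HeatMap()
for _ in range(150):
    hm.touch("blk-a")
for _ in range(12):
    hm.touch("blk-b")
hm.touch("blk-c")
print(hm.tier("blk-a"), hm.tier("blk-b"), hm.tier("blk-c"), hm.tier("blk-d"))
# dram-cache ssd hdd cloud-archive
```

A real implementation would also decay counts over time so that once-hot data can cool off, matching the changing access patterns in the figure.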



Veeam GDPR preparedness experiences Webinar walking the talk

Wed, 7 Mar 2018 18:28:18 -0600

Veeam GDPR preparedness experiences Fireside chat Webinar March 27, 9AM PT

This free (register here) fireside chat webinar sponsored by Veeam looks at Veeam's GDPR preparedness experiences based on what Veeam did to be ready for the May 25, 2018 General Data Protection Regulation (GDPR) taking effect. The format of this webinar will be a fireside chat between me and Danny Allan (@DannyAllan5) of Veeam as we discuss the experiences and lessons learned by Veeam during their journey to prepare for GDPR. Danny has put together a five-part blog series here covering some of Veeam's findings and lessons learned that you can leverage to prepare for GDPR, as well as what we will discuss, among other related topics, during the fireside chat webinar. Keep in mind that GDPR is commonly mistaken as just a European regulation when in fact its reach is global. In addition to being global, it is also inclusive of big as well as small organizations, cloud and non-cloud entities, and spans industries, along with different parts of an organization, from human resources (HR) to accounting and finance to sales and marketing among others. Join me and Danny Allan as we discuss GDPR along with five key lessons learned during Veeam's road to GDPR compliance, as well as how their software solutions played a critical role in managing their own environment. In other words, Veeam is not just talking the talk; they are also walking the talk, eating their own dog food among other clichés. Register for the event, or catch the replay here.

Where to learn more

Learn more about data protection, GDPR, software defined data center (SDDC), software defined data infrastructures (SDDI), cloud and related topics via the following links:

GDPR (General Data Protection Regulation) Resources Are You Ready?
GDPR: An overview of Veeam’s 5 lessons learned on our way to compliancy
World Backup Day 2018 Data Protection Readiness Reminder
Data Infrastructure Server Storage I/O related Tradecraft Overview
Data Infrastructure Overview, Its What's Inside of Data Centers
4 3 2 1 and 3 2 1 data protection best practices
All You Need To Know about Remote Office/Branch Office Data Protection Backup (free webinar with registration)
Software Defined, Converged Infrastructure (CI), Hyper-Converged Infrastructure (HCI) resources
The SSD Place (SSD, NVM, PM, SCM, Flash, NVMe, 3D XPoint, MRAM and related topics)
The NVMe Place (NVMe related topics, trends, tools, technologies, tip resources)
Data Protection Diaries (Archive, Backup/Restore, BC, BR, DR, HA, RAID/EC/LRC, Replication, Security)
Software Defined Data Infrastructure Essentials (CRC Press 2017) including SDDC, Cloud, Container and more
Various Data Infrastructure related events, webinars and other activities
Server StorageIO.tv (various videos and podcasts, fun and for work)

Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

What this all means and wrap-up

Now is the time to be prepared for the upcoming GDPR implementation. Join me and Danny Allan to learn what you need to be doing now, as well as compare what you have done or are doing to be prepared for GDPR. Ok, nuff said, for now.

Gs Greg Schulz - Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. Fi[...]



VMware continues cloud construction with March announcements

Wed, 7 Mar 2018 15:05:01 -0600

VMware continues cloud construction with March announcements of new features and other enhancements.

VMware Cloud Provides Consistent Operations and Infrastructure Via: VMware.com

With its recent announcements, VMware continues cloud construction, adding new features, enhancements, partnerships along with services. Like other vendors and service providers who tried and tested the waters of having their own public cloud, VMware has moved beyond its vCloud Air initiative, selling that business to OVH. VMware, while being a publicly traded company (VMW), is by way of majority ownership part of the Dell Technologies family of companies via the 2016 acquisition of EMC by Dell. What this means is that like Dell Technologies, VMware is focused on providing solutions and services to its cloud provider partners instead of building, deploying and running its own cloud in competition with those partners.

VMware Cloud Data Infrastructure and SDDC layers Via: VMware.com

The VMware Cloud message and strategy is focused on providing software solutions to cloud and other data infrastructure partners (and customers) instead of competing with them (e.g. divesting vCloud Air, partnering with AWS and IBM Softlayer). Part of the VMware Cloud message and strategy is to provide consistent operations and management across clouds, containers, virtual machines (VM) as well as other software defined data center (SDDC) and software defined data infrastructures. In other words, VMware is providing consistent management to leverage the common experiences of data infrastructure staff along with resources in a hybrid, cross-cloud and software defined environment in support of existing as well as cloud native applications.
VMware Cloud on AWS Image via: AWS.com

Note that VMware Cloud services run on top of AWS EC2 bare metal (BM) server instances, as well as on BM instances at IBM Softlayer and OVH. Learn more about AWS EC2 BM compute instances, aka Metal as a Service (MaaS), here. In addition to AWS, IBM and OVH, VMware claims over 4,000 regional cloud and managed service providers who have built their data infrastructures using VMware based technologies.

VMware continues cloud construction updates

Building off of previous announcements, VMware continues cloud construction with enhancements to its Amazon Web Services (AWS) partnership along with services for the IBM Softlayer cloud as well as OVH. As a refresher, OVH is what was formerly known as VMware vCloud Air before it was sold off. Besides expanding on existing cloud partner solution offerings, VMware also announced additional cloud, software defined data center (SDDC) and other software defined data infrastructure environment management capabilities. SDDC and data infrastructure management tools include leveraging VMware's acquisition of Wavefront among others.

VMware Cloud Updates and New Features

VMware Cloud on AWS European regions (now in London, adding Frankfurt, Germany)
Stretch clusters with synchronous replication for cross-geography location resiliency
Support for data intensive workloads including data footprint reduction (DFR) with vSAN based compression and data deduplication
Fujitsu services offering relationships
Expanded VMware Cloud Services enhancements

VMware Cloud Services enhancements include:

Hybrid Cloud Extension
Log Intelligence
Cost Insight
Wavefront

VMware Cloud in additional AWS Regions

As part of service expansion, VMware Cloud on AWS has been extended into a European region (London) with plans to expand into Frankfurt and an Asia Pacific location. Previously VMware Cloud[...]



Benefits of Moving Hyper-V Disaster Recovery to the Cloud Webinar

Thu, 15 Feb 2018 13:12:11 -0600

Benefits of Moving Hyper-V Disaster Recovery to the Cloud and Achieve global cloud data availability from an Always-On approach with Veeam Cloud Connect webinar. Feb. 28, 2018 at 11am PT / 2pm ET

Windows Server and Hyper-V software defined data center (SDDC) based applications need always-on availability and access to data, which means enabling cloud based data protection (including backup/recovery) for seamless disaster recovery (DR), business continuance (BC), business resiliency (BR) and high availability (HA). Key to an always-on, available and accessible environment is having robust RTO and RPO aligned to your application workload needs. In other words, it is time for data protection to work for you and your applications instead of you working for it (e.g. the data protection tools and technologies). This free data protection webinar (registration required), sponsored by KeepItSafe and produced by Virtualization & Cloud Review, will be an interactive webinar discussion (not death by PowerPoint or UI/GUI product demo ;)) pertaining to enabling always-on application (as well as data) availability for Windows Server and Hyper-V environments. Keep in mind, with World Backup Day coming up on March 31, now is a good time to make sure your applications and data are protected as well as recoverable when something bad happens, leveraging Hyper-V disaster recovery. Join me along with representatives from Veeam and KeepItSafe for an informal conversation including strategies along with how to enable an always-on, always available applications data infrastructure for Hyper-V based solutions.
Our conversation will include discussion around:

Data protection strategies for Microsoft Windows Server Hyper-V applications
Enabling rapid recovery time objectives (RTO) and good recovery point objectives (RPO)
Evolving from VM disaster recovery to cloud-based DRaaS
Implementing 4 3 2 1 data protection availability for Hyper-V with Veeam and KeepItSafe DRaaS

Register for the live event or catch the replay here.

Where to learn more

Learn more about data protection, software defined data center (SDDC), software defined data infrastructures (SDDI), Hyper-V, cloud and related topics via the following links:

World Backup Day 2018 Data Protection Readiness Reminder
Data Infrastructure Server Storage I/O related Tradecraft Overview
Data Infrastructure Overview, Its What's Inside of Data Centers
4 3 2 1 and 3 2 1 data protection best practices
All You Need To Know about ROBO Data Protection Backup (free webinar with registration)
Software Defined, Converged Infrastructure (CI), Hyper-Converged Infrastructure (HCI) resources
The SSD Place (SSD, NVM, PM, SCM, Flash, NVMe, 3D XPoint, MRAM and related topics)
The NVMe Place (NVMe related topics, trends, tools, technologies, tip resources)
Data Protection Diaries (Archive, Backup/Restore, BC, BR, DR, HA, RAID/EC/LRC, Replication, Security)
Various Data Infrastructure related events, webinars and other activities
Server StorageIO.tv (various videos and podcasts, fun and for work)

Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

What this all means and wrap-up

You cannot go forward if you cannot go back to a particular point in time (e.g. recovery point objective or RPO). Likewise, if you cannot go back to a given RPO, how can you go forward with your business as well as meet your recovery time objective (RTO)? Join us for the live conversation or replay by registering (free) here to learn how t[...]
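The 4 3 2 1 approach mentioned above (at least 4 copies, on 3 different media or systems, at 2 sites, with at least 1 off-line or off-site copy) can be sketched as a simple inventory check. The descriptor fields and the exact reading of each digit below are my own assumptions for illustration:

```python
def meets_4321(copies):
    """Check a list of copy descriptors against a 4 3 2 1 style rule:
    >= 4 copies, >= 3 media/systems, >= 2 sites, >= 1 off-line off-site copy.

    Descriptor fields (medium, site, offline) are illustrative assumptions."""
    media = {c["medium"] for c in copies}
    sites = {c["site"] for c in copies}
    offsite_offline = any(
        c["site"] != "primary" and c.get("offline") for c in copies
    )
    return len(copies) >= 4 and len(media) >= 3 and len(sites) >= 2 and offsite_offline

copies = [
    {"medium": "ssd",   "site": "primary", "offline": False},  # production
    {"medium": "hdd",   "site": "primary", "offline": False},  # local backup
    {"medium": "cloud", "site": "dr",      "offline": False},  # cloud replica
    {"medium": "tape",  "site": "vault",   "offline": True},   # off-line, off-site
]
print(meets_4321(copies))  # True
```

Dropping the tape copy fails the check twice over: only three copies remain and no off-line off-site copy survives.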



World Backup Day 2018 Data Protection Readiness Reminder

Sun, 11 Feb 2018 09:08:07 -0600

World Backup Day 2018 Data Protection Readiness Reminder

It's that time of year again: a World Backup Day 2018 data protection readiness reminder. In case you have forgotten, or were not aware, this coming Saturday, March 31, is World Backup (and Recovery) Day. The annual day is a reminder to make sure you are protecting your applications, data, information, configuration settings as well as data infrastructures. While the emphasis is on backup, that also means recovery as well as testing to make sure everything is working properly. It's time that the focus of World Backup Day expanded from just backup to broader data protection and things that start with R. Some data protection (and backup) related things, tools, tradecraft techniques, technologies and trends that start with R include readiness, recovery, reconstruct, restore, restart, resume, replication, rollback, roll forward, RAID and erasure codes, resiliency, recovery time objective (RTO), and recovery point objective (RPO), among others. Keep in mind that data protection is a broader focus than just backup and recovery. Data protection includes disaster recovery (DR), business continuance (BC), business resiliency (BR), security (logical and physical), standard and high availability (HA), as well as durability, archiving, data footprint reduction, and copy data management (CDM), along with various technologies, tradecraft techniques and tools.

Quick Data Protection, Backup and Recovery Checklist

Keep the 4 3 2 1 or the shorter, older 3 2 1 data protection rules in mind
Do you know what data, applications, configuration settings, metadata, keys, and certificates are being protected?
Do you know how many versions and copies exist, where they are stored, and what is on or off-site, on or off-line?
Implement data protection at different intervals and coverage of various layers (application, transaction, database, file system, operating system, hypervisors, device or volume among others)
Have you protected your data protection environment, including software, configuration, catalogs, indexes, databases along with management tools?
Verify that data protection point-in-time copies (backups, snapshots, consistency points, checkpoints, versions, replicas) are working as intended
Make sure that not only are the point-in-time protection copies running when scheduled, but also that they are protecting what's intended
Test to see if the protection copies can actually be used; this means restoring as well as accessing the data via applications
Watch out to prevent a disaster in the course of testing: plan, prepare, practice, learn, refine, improve
In addition to verifying your data protection (backup, BC, DR) for work, also take time to see how your home or personal data is protected

View additional tips, techniques, and checklist items in this Data Protection fundamentals series of posts here.

Where To Learn More

View additional Data Infrastructure Data Protection and related tools, trends, technology and tradecraft skills topics via the following links.

Data Protection Diaries series
Part 1 – Data Infrastructure Data Protection Fundamentals
Part 2 – Reliability, Availability, Serviceability (RAS) Data Protection Fundamentals
Part 3 – Data Protection Fundamental Access Availability RAID Erasure Codes (EC) including LRC
Part 4 – Data Protection Recovery Points (Archive, Backup, Snapshots, Versions)
Part 5 – Point In Time Data Protection Gra[...]
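The checklist item about testing that protection copies can actually be used can be sketched as a minimal restore verification: compare checksums of the source and the restored copy. The file names below are throwaway examples, and a real restore test would also exercise the application against the restored data:

```python
import hashlib
import tempfile
from pathlib import Path

def sha256(path):
    # Stream the file in 1 MB chunks so large backups do not exhaust memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source, restored):
    # A restore test only counts if the restored copy matches bit for bit
    return sha256(source) == sha256(restored)

# Demo with throwaway files standing in for production data and a test restore
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "data.db"
    rst = Path(tmp) / "restored.db"
    src.write_bytes(b"important records")
    rst.write_bytes(b"important records")
    print(verify_restore(src, rst))  # True
```

Checksum comparison catches silent corruption that a "backup job completed" status alone never will.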



Use Intel Optane NVMe U.2 SFF 8639 SSD drive in PCIe slot

Fri, 2 Feb 2018 13:14:15 -0600

Use NVMe U.2 SFF 8639 disk drive form factor SSD in PCIe slot

Need to install or use an Intel Optane NVMe 900P or other Nonvolatile Memory (NVM) Express (NVMe) based U.2 SFF 8639 disk drive form factor Solid State Device (SSD) in a PCIe slot? For example, I needed to connect an Intel Optane NVMe 900P U.2 SFF 8639 drive form factor SSD to one of my servers using an available PCIe slot. The solution I used was a carrier adapter card such as those from Ableconn (PEXU2-132 NVMe 2.5-inch U.2 [SFF-8639]) via Amazon.com among other global venues.

Top: Intel 750 NVMe PCIe AiC SSD; bottom: Intel Optane NVMe 900P U.2 SSD with Ableconn carrier

The above image shows on top an Intel 750 NVMe PCIe Add in Card (AiC) SSD and on the bottom an Intel Optane NVMe 900P 280GB U.2 (SFF 8639) drive form factor SSD mounted on an Ableconn carrier adapter.

NVMe Tradecraft Refresher

NVMe is the protocol that is implemented with different topologies including local via PCIe using U.2 aka SFF-8639 (aka disk drive form factor), M.2 aka Next Generation Form Factor (NGFF), also known as "gum stick", along with PCIe Add in Card (AiC). NVMe accessed devices can be installed in laptops, ultrabooks, workstations, servers and storage systems using the various form factors. U.2 drives are also referred to by some as PCIe drives in that the NVMe command set protocol is implemented over a PCIe x4 physical connection to the devices. Jump ahead if you want to skip over the NVMe primer refresh material to learn more about U.2 8639 devices.

Various SSD device form factors and interfaces

In addition to form factor, NVMe devices can be direct attached and dedicated, rack and shared, as well as accessed via networks, also known as fabrics, such as NVMe over Fabrics.

The many facets of NVMe as a front-end, back-end, direct attach and fabric

Context is important with NVMe in that fabric can mean NVMe over Fibre Channel (FC-NVMe) where the NVMe command set protocol is used in place of the SCSI Fibre Channel Protocol (e.g.
SCSI_FCP), aka FCP, or what many simply know and refer to as Fibre Channel. NVMe over Fabric can also mean the NVMe command set implemented over an RDMA over Converged Ethernet (RoCE) based network. Another point of context: do not confuse Nonvolatile Memory (NVM), which is the storage or memory media, with NVMe, which is the interface for accessing storage (e.g. similar to SAS, SATA and others). As a refresher, NVM or the media are the various persistent memories (PM) including NVRAM, NAND Flash, 3D XPoint along with other storage class memories (SCM) used in SSDs (in various packaging).

Learn more about 3D XPoint with the following resources:

Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) for servers and storage
Part II – Intel and Micron new 3D XPoint server and storage NVM
Part III – 3D XPoint new server storage memory from Intel and Micron

Learn more about (or refresh) your NVMe server storage I/O knowledge, experience and tradecraft skill set with this post here. View this piece here looking at NVM vs. NVMe and how one is the media where data is stored, while the other is an access protocol (e.g. NVMe). Also visit www.thenvmeplace.com to view additional NVMe tips, tools, technologies, and related resources.

NVMe U.2 SFF-8639 aka 8639 SSD

At quick glance, an NVMe U.2 SFF-8639 SSD may look like a SAS small form factor (SFF) 2.5" HDD or SSD. Also, keep in mind that HDDs and SSDs with a SAS interface have a small tab to prevent inserting them into a SATA port. As a reminder, SATA devices can plug into SAS ports, however not the other way around, which is what the key tab function does (prevents accidental in[...]
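As a companion to the U.2/8639 discussion, here is a hedged sketch of enumerating NVMe controllers on a Linux host via sysfs. The /sys/class/nvme layout is standard on Linux; on other platforms (or hosts without NVMe devices) the function simply returns an empty list, and the dictionary keys are my own choice for illustration:

```python
from pathlib import Path

def list_nvme_devices(sysfs=Path("/sys/class/nvme")):
    """Enumerate NVMe controllers via Linux sysfs.

    Returns a list of {"name", "model"} dicts; [] when the path is absent."""
    devices = []
    if not sysfs.is_dir():
        return devices
    for ctrl in sorted(sysfs.iterdir()):
        if not ctrl.is_dir():
            continue
        model = ctrl / "model"
        devices.append({
            "name": ctrl.name,  # e.g. nvme0
            "model": model.read_text().strip() if model.is_file() else "unknown",
        })
    return devices

for dev in list_nvme_devices():
    print(dev["name"], dev["model"])
```

On a box with the Optane 900P installed this would list something like nvme0 with the drive's model string; the nvme-cli package's `nvme list` command provides a richer view of the same information.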



How to Achieve Flexible Data Protection Availability with All Flash Storage Solutions

Wed, 10 Jan 2018 13:14:15 -0600

Achieve Flexible Data Protection Availability with All Flash Storage Solutions

By Greg Schulz - www.storageioblog.com January 10, 2018

How to Achieve Flexible Data Protection and Availability with All-Flash Storage Solutions: an interactive webinar discussion (not death by PowerPoint or UI/GUI product demo ;)). Tuesday January 30, 2018, 11AM PT / 2PM ET, via Redmond Magazine (free with registration).

Everything is not the same across different organizations, environments, application workloads and the data infrastructures that support them. Fast applications and workloads need fast protection, restoration, and resumption as well as fast flash storage. This applies across legacy, software-defined, virtual, container, cloud, hybrid, converged and HCI among other environments. Join me along with representatives from Pure Storage and Veeam for this interactive discussion as we explore how to boost the performance, availability, capacity, and economics (PACE) of your applications along with the data infrastructures that support them. Topics include:

How all-flash storage enables faster protection and restoration of fast applications
Why data protection and availability should not be an afterthought
Ways to leverage your data protection storage to drive business change
How to simplify and reduce complexity to boost productivity while lowering costs
Why workload aggregation and consolidation should not cause aggravation

Register for the live event or catch the replay here.
Where to learn more

Learn more about data protection, SSD, flash, data infrastructure and related topics via the following links:

Data Infrastructure Server Storage I/O related Tradecraft Overview
Data Infrastructure Overview, It's What's Inside of Data Centers
All You Need To Know about Remote Office/Branch Office Data Protection Backup (free webinar with registration)
Software Defined, Converged Infrastructure (CI), Hyper-Converged Infrastructure (HCI) resources
The SSD Place (SSD, NVM, PM, SCM, Flash, NVMe, 3D XPoint, MRAM and related topics)
The NVMe Place (NVMe related topics, trends, tools, technologies, tip resources)
Data Protection Diaries (Archive, Backup/Restore, BC, BR, DR, HA, RAID/EC/LRC, Replication, Security)
Software Defined Data Infrastructure Essentials (CRC Press 2017) including SDDC, Cloud, Container and more
Various Data Infrastructure related events, webinars and other activities
Server StorageIO.tv (various videos and podcasts, fun and for work)

What this all means and wrap-up

Fast applications need fast and resilient data infrastructures that include server, storage, and I/O networking along with data protection. Performance depends on availability along with durability; likewise, availability and accessibility depend on performance; they go hand in hand. Join me and others from Pure Storage as well as Veeam for this conversational discussion about How to Achieve Flexible Data Protection and Availability with All-Flash Storage Solutions. Ok, nuff said, for now… Cheers Gs

Greg Schulz - Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration.
First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is fo[...]



2017 Holiday Seasons Greetings From Server StorageIO

Sat, 23 Dec 2017 14:13:12 -0600


2017 Holiday Seasons Greetings


Greetings from Server StorageIO, wishing you and your data infrastructures, as well as friends and families, all the best for a safe, merry and happy 2017 holiday season.

(image)

Btw, if you have not done so lately, check out storageio.tv to view various videos, podcasts and images. Some of the storageio.tv content is data infrastructure technology, tools, technique, tradecraft and trends related. Also on storageio.tv are fun content including views from the drone as well as cooking (food and recipes) among other items.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz - Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2018 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.







IT transformation Serverless Life Beyond DevOps with New York Times CTO Nick Rockwell Podcast

Thu, 30 Nov 2017 19:18:17 -0600

By Greg Schulz - www.storageioblog.com November 30, 2017

In this Server StorageIO podcast episode, New York Times CTO / CIO Nick Rockwell (@nicksrockwell) joins me for a conversation discussing digital, business and IT transformation, serverless life beyond DevOps, and related topics. In our conversation we discuss challenges with metrics, understanding value vs. cost particularly for software, Nick's perspective as both CIO and CTO of the New York Times, and the importance of IT being involved in and understanding the business vs. just being technology focused. We also discuss the bigger, broader opportunity of serverless (aka microservices, containers) life beyond DevOps and how higher level business logic developers can benefit from the technology instead of just a DevOps for infrastructure focus. Buzzwords, buzz terms and themes include datacenter technologies, NY Times, data infrastructure, management, trends, metrics, digital transformation, tradecraft skills, DevOps, serverless among others. Check out Nick's post The Futile Resistance to Serverless here. Listen to the podcast discussion here (MP3, 16 minutes and 50 seconds) as well as on iTunes here.

Where to learn more

Learn more about serverless, DevOps, IT transformation along with other related topics via the following links:

The Futile Resistance to Serverless (via Nick Rockwell blog)
Nick Rockwell New York Times profile, Nick on Twitter (@NickRocksWell)
Data Infrastructure Server Storage I/O related Tradecraft Overview
Did you want a side of SLBS (server less BS) with your software or hardware FUD?
Software Defined Data Infrastructure Essentials (CRC Press 2017)
Server StorageIO.tv (various videos and podcasts, fun and for work)

What this all means and wrap-up

Check out my discussion here (MP3) with Nick Rockwell as we discuss IT and business transformation, metrics, software development, and serverless life beyond DevOps, also available on iTunes. Ok, nuff said, for now… Cheers
Gs Greg Schulz - Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden. All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2018 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO. [...]



Data Protection Diaries Fundamental Topics Tools Techniques Technologies Tips

Sun, 26 Nov 2017 18:17:16 -0600

Data Protection Diaries Fundamental Topics Tools Techniques Technologies Tips Companion to Software Defined Data Infrastructure Essentials - Cloud, Converged, Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017) By Greg Schulz - www.storageioblog.com November 26, 2017 This is Part I of a multi-part series on Data Protection fundamental tools, topics, techniques, terms, technologies, trends, tradecraft and tips as a follow-up to my Data Protection Diaries series, as well as a companion to my new book Software Defined Data Infrastructure Essentials - Cloud, Converged, Virtual Server Storage I/O Fundamental tradecraft (CRC Press 2017). The focus of this series is data protection including Data Infrastructure Services: Availability, RAS, RAID and Erasure Codes (including LRC) (Chapter 9), and Data Infrastructure Services: Availability, Recovery Point (Chapter 10). Additional Data Protection related chapters include Storage Mediums and Component Devices (Chapter 7), Management, Access, Tenancy, and Performance (Chapter 8), as well as Capacity, Data Footprint Reduction (Chapter 11), Storage Systems and Solutions Products and Cloud (Chapter 12), and Data Infrastructure and Software-Defined Management (Chapter 13) among others. Posts in the series include excerpts from Software Defined Data Infrastructure (SDDI) Essentials pertaining to data protection for legacy along with software defined data centers (SDDC), and data infrastructures in general, along with related topics. In addition to excerpts, the posts also contain links to articles, tips, posts, videos, webinars, events and other companion material. Note that figure numbers in this series are those from the SDDI book and not in the order that they appear in the posts. 
Posts in this series include: Part 1 - Data Infrastructure Data Protection Fundamentals Part 2 - Reliability, Availability, Serviceability (RAS) Data Protection Fundamentals Part 3 - Data Protection Access Availability RAID Erasure Codes (EC) including LRC Part 4 - Data Protection Recovery Points (Archive, Backup, Snapshots, Versions) Part 5 - Point In Time Data Protection Granularity Points of Interest Part 6 - Data Protection Security Logical Physical Software Defined Part 7 - Data Protection Tools, Technologies, Toolbox, Buzzword Bingo Trends Part 8 - Data Protection Diaries Walking Data Protection Talk Part 9 - Who's Doing What (Toolbox Technology Tools) Part 10 - Data Protection Resources Where to Learn More Figure 1.5 Data Infrastructures and other IT Infrastructure Layers Data Infrastructures Data infrastructures exist to support business, cloud and information technology (IT) among other applications that transform data into information or services. The fundamental role of data infrastructures is to provide a platform environment for applications and data that is resilient, flexible, scalable, agile, efficient as well as cost-effective. Put another way, data infrastructures exist to protect, preserve, process, move, secure and serve data as well as their applications for information services delivery. Technologies that make up data infrastructures include hardware, software, or managed services, servers, storage, I/O and networking along with people, processes, policies and various tools spanning legacy, software-defined virtual, containers and cloud. Read more about data infrastructures (it's what's inside data centers) here. Various Needs Demand Drivers For Data Protection Why The Need For D[...]
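The RAID and erasure code topics listed above (Parts 2 and 3) rest on a simple idea: parity lets you rebuild a lost piece of data from what survives. A minimal illustrative sketch of RAID-4/5 style XOR parity (not code from the book, just a toy with hypothetical four-byte blocks):

```python
from functools import reduce

def xor_parity(blocks):
    """Compute a RAID-4/5 style parity block as the bytewise XOR of the data blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild(surviving_blocks, parity):
    """Reconstruct a single lost data block by XOR-ing the survivors with parity."""
    return xor_parity(surviving_blocks + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks in a stripe
parity = xor_parity(data)

# Simulate losing the middle block and rebuilding it from the rest plus parity
recovered = rebuild([data[0], data[2]], parity)
assert recovered == b"BBBB"
```

Erasure codes such as LRC generalize this beyond a single parity block, trading more computation for tolerance of multiple failures.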



HPE Announces AMD Powered Gen 10 ProLiant DL385 For Software Defined Workloads

Mon, 20 Nov 2017 14:15:16 -0600

HPE Announces AMD Powered Gen 10 ProLiant DL385 For Software Defined Workloads By Greg Schulz - www.storageioblog.com November 20, 2017 HPE announced today a new AMD EPYC 7000 powered Gen 10 ProLiant DL385 for software defined workloads including server virtualization, software-defined data center (SDDC), software-defined data infrastructure (SDDI), and software-defined storage among others. These new servers are part of a broader Gen10 HPE portfolio of ProLiant DL systems. 24 Small Form Factor Drive front view DL385 Gen 10 Via HPE The value proposition HPE is promoting for these new AMD powered Gen 10 DL385 servers, besides supporting software-defined, SDDI, SDDC, and related workloads, is security, density and lower price than others. HPE is claiming that with the new AMD EPYC system on a chip (SoC) processor powered Gen 10 DL385 it is offering up to 50 percent lower cost per virtual machine (VM) than traditional server solutions. About HPE AMD Powered Gen 10 DL385 HPE AMD EPYC 7000 Gen 10 DL385 features: 2U (height) form factor HPE OneView and iLO management Flexible HPE finance options Data Infrastructure Security AMD EPYC 7000 System on Chip (SoC) processors NVMe storage (embedded M.2 and U.2/8639 Small Form Factor (SFF) e.g. drive form factor) Addresses server I/O and memory bottlenecks These new HPE servers are positioned for: Software Defined, Server Virtualization Virtual Desktop Infrastructure (VDI) workspaces HPC, Cloud and other general high-density workloads General Data Infrastructure workloads that benefit from memory-centric processing or GPUs Different AMD Powered DL385 ProLiant Gen 10 Packaging Options Common across AMD EPYC 7000 powered Gen 10 DL385 servers are a 2U high form factor, iLO management software and interfaces, flexible LAN on Motherboard (LOM) options, MicroSD (optional dual MicroSD), NVMe (embedded M.2 and SFF U.2) server storage I/O interfaces and drives, health and status LEDs, GPU support, and single or dual socket processors. 
HPE DL385 Gen10 Inside View Via HPE HPE DL385 Gen10 Rear View Via HPE Other features include up to three storage drive bays, support for Large Form Factor (LFF) and Small Form Factor (SFF) devices (HDD and SSD) including SFF NVMe (e.g., U.2) SSD, up to 4 x GbE NICs, and a PCIe riser for GPU (an optional second riser requires the second processor). Other features and options include HPE SmartArray (RAID), up to 6 cooling fans, internal and external USB 3, and an optional universal media bay that can also add a front display, optional Optical Disc Drive (ODD), or optional 2 x U.2 NVMe SFF SSD. Note the media bay occupies one of the three storage drive bays. HPE DL385 Form Factor Via HPE Up to 3 x Drive Bays Up to 12 LFF drives (2 per bay) Up to 24 SFF drives (3 x 8 drive bays, 6 SFF + 2 NVMe U.2 or 8 x NVMe) AMD EPYC 7000 Series The AMD EPYC 7000 series is available in single- and dual-socket configurations. View additional AMD EPYC speeds and feeds in this data sheet (PDF), along with AMD server benchmarks here. HPE DL385 Gen 10 AMD EPYC Specifications Via HPE AMD EPYC 7000 General Features Single and dual socket Up to 32 cores, 64 threads per socket Up to 16 DDR4 DIMMs over eight channels per socket (e.g., up to 2TB RAM) Up to 128 PCIe Gen 3 lanes (e.g. combination of x4, x8, x16 etc.) Future 128GB DIMM support AMD EPYC 7000 Security Features Secure processor and secure boot for malware rootkit protection System memory encryption (SME) Secure Encrypted [...]
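As a quick sanity check on the memory figures in the spec list above (assuming the "up to 2TB RAM" figure applies per socket with the future 128GB DIMMs, which is how the bullet reads):

```python
# Back-of-envelope memory capacity check from the EPYC 7000 spec bullets above.
dimms_per_socket = 16     # "Up to 16 DDR4 DIMMs over eight channels per socket"
future_dimm_gb = 128      # "Future 128GB DIMM support"

per_socket_tb = dimms_per_socket * future_dimm_gb / 1024
assert per_socket_tb == 2.0   # matches the "up to 2TB RAM" figure
print(f"Max RAM per socket: {per_socket_tb:.0f} TB")
```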



AWS Announces New S3 Cloud Storage Security Encryption Features

Tue, 14 Nov 2017 19:19:19 -0600

Amazon Web Services (AWS) recently announced new Simple Storage Service (S3) encryption and security enhancements including Default Encryption, Permission Checks, Cross-Region Replication ACL Overwrite, Cross-Region Replication with KMS and Detailed Inventory Report. Another recent announcement by AWS is for PrivateLink endpoints within a Virtual Private Cloud (VPC). AWS Service Dashboard Default Encryption Extending previous security features, you can now mandate that all objects stored in a given S3 bucket be encrypted without specifying a bucket policy that rejects non-encrypted objects. There are three server-side encryption (SSE) options for S3 objects including keys managed by S3, AWS KMS and SSE Customer (SSE-C) managed keys. These options provide more flexibility as well as control for different environments along with increased granularity. Note that encryption can be forced on all objects in a bucket by specifying a bucket encryption configuration. When an unencrypted object is stored in an encrypted bucket, it will inherit the bucket's encryption, or alternately the encryption specified on the PUT request if provided. AWS S3 Buckets Permission Checks There is now an indicator on the S3 console dashboard prominently indicating which S3 buckets are publicly accessible. In the above image, some of my AWS S3 buckets are shown including one that is public facing. Note in the image above the notation next to buckets that are open to the public. Cross-Region Replication ACL Overwrite and KMS AWS Key Management Service (KMS) keys can be used for encrypting objects. Building on previous cross-region replication capabilities, when you replicate objects across AWS accounts, a new ACL providing full access to the destination account can now be specified. Detailed Inventory Report The S3 Inventory report (which can also be encrypted) now includes the encryption status of each object. 
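To make the default-encryption feature above concrete, here is a sketch of the bucket encryption configuration payload in the shape used by the S3 PutBucketEncryption API (e.g. via boto3's put_bucket_encryption or the `aws s3api put-bucket-encryption` CLI). The KMS key alias is a hypothetical placeholder:

```python
import json

# S3 default-encryption rule: every object written to the bucket is
# encrypted server-side with the named KMS key unless the PUT specifies
# its own encryption.
sse_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",          # or "AES256" for S3-managed keys
                "KMSMasterKeyID": "alias/my-key",   # hypothetical KMS key alias
            }
        }
    ]
}

print(json.dumps(sse_config, indent=2))
```

With a configuration like this applied, the bucket policy that rejects unencrypted PUTs is no longer required.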
PrivateLink for AWS Services PrivateLink enables AWS customers to access services from a VPC without using a public IP, as well as without traffic having to go across the internet (e.g. it keeps traffic within the AWS network). PrivateLink endpoints appear as Elastic Network Interfaces (ENI) with private IPs in your VPC and are highly available, resilient and scalable. Besides scaling and resiliency, PrivateLink eliminates the need for whitelisting of public IPs as well as managing internet gateway, NAT and firewall proxies to connect to AWS services (Elastic Compute Cloud (EC2), Elastic Load Balancer (ELB), Kinesis Streams, Service Catalog, EC2 Systems Manager). Learn more about AWS PrivateLink for services here including VPC Endpoint Pricing here. Where To Learn More Learn more about related technology, trends, tools, techniques, and tips with the following links. Cloud conversations: AWS EBS, Glacier and S3 overview (Part I) AWS S3 Storage Gateway Revisited (Part I) Amazon Web Service AWS September 2017 Software Defined Data Infrastructure Updates S3motion Buckets Containers Objects AWS S3 Cloud and EMCcode Cloud conversations: AWS EBS, Glacier and S3 overview (Part II S3) Cloud Conversations: AWS S3 Cross Region Replication storage enhancements Part II Revisiting AWS S3 Storage Gateway (Test Drive Deployment) Software Defined Data Infrastructure Essentials (CRC Press 2017) What This All Means Common cloud concern considerations include privacy and security. AWS S3 among other industry cloud service and storag[...]



October 2017 Server StorageIO Data Infrastructure Update Newsletter

Mon, 30 Oct 2017 23:23:23 -0600

Server StorageIO October 2017 Data Infrastructure Update Newsletter [...]



Data Infrastructure server storage I/O network Recommended Reading #blogtober

Mon, 30 Oct 2017 22:22:22 -0600

Data Infrastructure Recommended Reading List and Book Shelf Updated 10/30/17 The following is an evolving recommended reading list of data infrastructure topics including server, storage I/O, networking, cloud, virtual, container, data protection and related topics that includes books, blogs, podcasts, events and industry links among other resources. Various data infrastructure including hardware, software, services related links: Links A-E Links F-J Links K-O Links P-T Links U-Z Other Links In addition to my own books including Software Defined Data Infrastructure Essentials (CRC Press 2017), the following are Server StorageIO recommended reading, watching and listening list items. The list includes various IT, Data Infrastructure and related topics. The Intel Recommended Reading List (IRRL) for developers is a good resource to check out. Check out the Blogtober list of blogs and posts occurring during October 2017 here. Preston De Guise aka @backupbear, author of several books, has an interesting new site Foolsrushin.info that looks at topics including ethics in IT among others. Check out his new book Data Protection: Ensuring Data Availability (CRC Press 2017), available via Amazon.com here. Brendan Gregg has a great site for Linux performance related topics here. Greg Knieriemen has a must-read weekly blog, post, column collection of what's going on in and around the IT and data infrastructure related industries. Check it out here. Interested in file systems, CIFS, SMB, SAMBA and related topics? Then check out Chris Hertel's book on implementing CIFS here at Amazon.com. For those involved with VMware, check out Frank Denneman's VMware vSphere 6.5 host resource guide-book here at Amazon.com. Docker: Up & Running: Shipping Reliable Containers in Production by Karl Matthias & Sean P. Kane via Amazon.com here. Essential Virtual SAN (VSAN): Administrator's Guide to VMware Virtual SAN, 2nd ed. by Cormac Hogan & Duncan Epping via Amazon.com here. 
Hadoop: The Definitive Guide: Storage and Analysis at Internet Scale by Tom White via Amazon.com here. Systems Performance: Enterprise and the Cloud by Brendan Gregg Via Amazon.com here. Implementing Cloud Storage with OpenStack Swift by Amar Kapadia, Sreedhar Varma, & Kris Rajana Via Amazon.com here. The Human Face of Big Data by Rick Smolan & Jennifer Erwitt Via Amazon.com here. VMware vSphere 5.1 Clustering Deepdive (Vol. 1) by Duncan Epping & Frank Denneman Via Amazon.com here. Note: This is an older title, but there are still good fundamentals in it. Linux Administration: A Beginners Guide by Wale Soyinka Via Amazon.com here. TCP/IP Network Administration by Craig Hunt Via Amazon.com here. Cisco IOS Cookbook: Field tested solutions to Cisco Router Problems by Kevin Dooley and Ian Brown Via Amazon.com here. I often mention in presentations a must have for anybody involved with software defined anything, or programming for that matter which is the Niklaus Wirth classic Algorithms + Data Structures = Programs that you can get on Amazon.com here. Another great book to have is Seven Databases in Seven Weeks (here is a book review) which not only provides an overview of popular NoSQL databases such as Cassandra, Mongo, HBASE among others, lots of g[...]



PCIe Server Storage I/O Network Fundamentals #blogtober

Sat, 28 Oct 2017 23:13:03 -0600

Peripheral Component Interconnect Express aka PCIe is a server, storage, and I/O networking fundamental component. This post is an excerpt from chapter 4 (Chapter 4: Servers: Physical, Virtual, Cloud, and Containers) of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available via Amazon.com and other global venues. In this post, we look at various PCIe fundamentals to learn and expand or refresh your server, storage, I/O and networking tradecraft skills experience. PCIe fundamental common server I/O component Common to all servers is some form of a main system board, which can range from a few square meters in supercomputers, data center rack, tower, and micro towers converged or standalone, to small Intel NUC (Next Unit of Compute), MSI and Kepler-47 footprint, or Raspberry Pi-type desktop servers and laptops. Likewise, PCIe is commonly found in storage and networking systems and appliances among other devices. For example, a blade server will have multiple server blades or modules, each with its own motherboard, which share a common backplane for connectivity. Another variation is a large server such as an IBM "Z" mainframe, Cray, or another supercomputer that consists of many specialized boards that function similarly to a smaller-sized motherboard on a larger scale. Some motherboards also have mezzanine or daughter boards for attachment of additional I/O networking or specialized devices. The following figure shows a generic example of a two-socket, eight-memory-channel-type server architecture. Generic computer server hardware architecture. Source: Software Defined Data Infrastructure Essentials (CRC Press 2017) The above figure shows several PCIe, USB, SAS, SATA, 10 GbE LAN, and other I/O ports. Different servers will have various combinations of processor and Dual Inline Memory Module (DIMM) Dynamic RAM (DRAM) sockets along with other features. 
What will also vary are the types and number of I/O and storage expansion ports, power and cooling, along with management tools or included software. PCIe, Including Mini-PCIe, NVMe, U.2, M.2, and GPU At the heart of many server I/O and connectivity solutions is the PCIe industry-standard interface (see PCIsig.com). PCIe is used to communicate with CPUs and the outside world of I/O networking devices. The importance of a faster and more efficient PCIe bus is to support more data moving in and out of servers while accessing fast external networks and storage. For example, a server with a 40-GbE NIC or adapter would have to have a PCIe port capable of 5 GB per second. If multiple 40-GbE ports are attached to a server, you can see where the need for faster PCIe interfaces comes into play. As more VMs are consolidated onto PMs (physical machines), and as applications place more performance demand either regarding bandwidth or activity (IOPS, frames, or packets) per second, more 10-GbE adapters will be needed until the price of 40-GbE (also 25, 50 or 100 GbE) becomes affordable. It is not if, but rather when you will grow into the performance needs on either a bandwidth/throughput basis or to support more activity and lower latency per interface. PCIe is a serial interface specified for how servers communicate between CPUs, memory, and motherboard-mounted as well as AiC devices. This communication includes support for attachment of onbo[...]
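The 40-GbE example above can be checked with a quick back-of-envelope calculation, assuming PCIe Gen 3 signaling (8 GT/s per lane with 128b/130b encoding, one direction only, ignoring protocol overhead):

```python
import math

# Effective PCIe Gen 3 bandwidth per lane: 8 GT/s with 128b/130b encoding
lane_gbps = 8 * (128 / 130)          # ~7.88 Gb/s usable per lane
lane_gbytes = lane_gbps / 8          # ~0.985 GB/s per lane

nic_gbps = 40                        # a 40-GbE NIC
nic_gbytes = nic_gbps / 8            # = 5 GB/s, matching the figure in the text

lanes_needed = math.ceil(nic_gbytes / lane_gbytes)
print(f"{nic_gbytes:.0f} GB/s NIC needs ~{lanes_needed} Gen3 lanes, i.e. an x8 slot")
```

Six lanes are needed in theory, which in practice means an x8 slot since PCIe slots come in x1/x2/x4/x8/x16 widths; this is why multi-port 40/100-GbE adapters push toward x16 and faster PCIe generations.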



Introducing Windows Subsystem for Linux WSL Overview #blogtober

Wed, 25 Oct 2017 18:17:16 -0600

Introducing Windows Subsystem for Linux (WSL) Overview #blogtober Microsoft has been increasing its support of Linux across the Azure public cloud, Hyper-V and Linux Integration Services (LIS), and Windows platforms including Windows Subsystem for Linux (WSL) as well as Server, along with Docker support. WSL with Ubuntu installed and open in a window on one of my Windows 10 systems. WSL is not a virtual machine (VM) running on Windows or Hyper-V; rather it is a subsystem that coexists next to win32 (read more about how it works, its features and enhancements here). Once installed, WSL enables use of the Linux bash shell along with familiar tools (find, grep, sed, awk, rsync among others) as well as services such as ssh and MySQL among others. What this all means is that if you work with both Windows and Linux, you can do so on the same desktop, laptop, server or system using your preferred commands. For example, in one window you can be using Powershell or traditional Windows commands and tools, while in another window working with grep, find and other tools, eliminating the need to install things such as wingrep among others. Installing WSL Depending on which release of Windows desktop or server you are running, there are a couple of different install paths. Since my Windows 10 is the most recent release (e.g. 1709) I was able to simply go to the Microsoft Windows Store via the desktop, search for Windows Linux, select the distribution, install and launch. Microsoft has some useful information for installing WSL on different Windows versions here, as well as for Windows Servers here. Get WSL from the Windows Store, or more information and options here. Click on Get the app Select desired WSL distribution Let's select SUSE as I already have Ubuntu installed (I have both) SUSE WSL in the process of downloading. 
Note SUSE needs an access code (free) that you get from https://www.suse.com/subscriptions/sles/developer/ and while waiting for the download and install is a good time to get that code. Launching WSL with SUSE, you will be prompted to enter the code mentioned above; if you do not have a code, get it here from SUSE. The WSL installation is very straightforward: enter the SUSE code (Ubuntu did not need a code). Note the Ubuntu and SUSE WSL taskbar icons circled bottom center. Provide a username for accessing the WSL bash shell along with a password, confirm how root and sudo are to be applied, and that is it. Seriously, the install for WSL, at least with Windows 10 1709, is that fast and easy. Note in the above image, I have WSL with Ubuntu open in a window on the left, WSL with SUSE on the right, and their taskbar icons bottom center. Enable Windows Subsystem for Linux Feature on Windows If you get the above WSL error message 0x8007007e when installing WSL Ubuntu, SUSE or another shell distro, make sure to enable the Windows WSL feature if not already installed. One option is to install additional Windows features via settings or control panel. For example, Control panel -> Programs and features -> Turn Windows features on or off -> Check the box for Windows Subsystem for Linux Another option is to install the Windows subsystem feature via Powershell, for example: enable-windowsoptionalfeature -online -featurename microsoft-windows-subsy[...]



Fixing the Windows 10 1709 post upgrade restart loop

Mon, 23 Oct 2017 23:23:23 -0600

Recently I needed to upgrade one of my systems to the Microsoft Windows 10 1709 (e.g. the September 2017) release, and post upgrade this resulted in Windows Explorer, the desktop and the taskbar going into an endless loop. For those not familiar with Windows 10 1709, learn more here, and here, including on how to get the bits (e.g. software). Windows 10 1709 is a semi-annual channel (SAC) release; Microsoft is following this model to enable a faster cadence or pace of releases, making new features available faster. Note that there is a Windows 10 1709 SAC, as well as a Windows Server 1709 SAC (more on that here). All was well with the 1709 install on Windows 10 until post upgrade when I logged into my account on my laptop (Lenovo X1). Once logged in, initially everything looked good until about 10 to 20 seconds later, when the screen flickered and the desktop refreshed, as did the taskbar. All was well for about another 10 to 20 seconds and again the desktop refreshed, as did the taskbar. Trying to use the Windows key plus other keys was unsuccessful; likewise trying to use the command prompt, Powershell or other tools was futile given how quickly the refresh occurred. Powering off the system and rebooting seemed normal until, once logged in, the desktop and taskbar again reset in the same looping fashion. Once again I did a shutdown and restart, logged in, and got the same result. The Safe Mode Fix Unless you can access a command prompt or Powershell with administrator privileges, boot into Windows Safe Mode. The solution to the post Windows 10 1709 upgrade desktop and taskbar restart loop was to boot into safe mode and run the following three commands. sfc /scannow dism.exe /online /cleanup-image /scanhealth dism.exe /online /cleanup-image /restorehealth Before you can run the above commands, access Windows Safe Mode. Tip: if your Windows 10 system presents a login screen, in the lower right corner select the Shutdown/Restart icon, then hold down the SHIFT key and select Restart. 
Your system should reboot presenting you with the following options; select Troubleshoot. Next select Advanced options shown below. Next select Startup Settings shown below. Note that this sequence of commands is also used for other troubleshooting scenarios including boot problems, restoring an image or rolling back to a previous protection point among other options. The following Startup Settings screen appears; select Restart to enter Safe Mode. Your system should then present the following options; select Safe Mode with Command Prompt (option 6). Next your system should display a Command Prompt where the following three commands are run: sfc /scannow dism.exe /online /cleanup-image /scanhealth dism.exe /online /cleanup-image /restorehealth Exit, shutdown, reboot and all should be good. Some Tips and Recommendations Before any upgrade, make sure you have good backups to enable various recovery points if needed. If you have not done so recently, make sure you have system restore enabled, as well as underlying hypervisor or storage system snapshots. If you have BitLocker enabled, before you do any upgrade, make sure to have a copy of your keys handy in case you need to use them. If you rely on a PIN or fingerprint for login, make sure you have your real password handy. If you have not done so recently, make sure your secondary standby emergency access account is working, if you [...]



Cloud Conversations Azure AWS Service Maps via Microsoft

Wed, 18 Oct 2017 10:11:22 -0600

Cloud Conversations Azure AWS Service Maps via Microsoft Microsoft has created an Azure and Amazon Web Services (AWS) Service Map (corresponding services from both providers). Image via Azure.Microsoft.com Note that this is an evolving work in progress from Microsoft; use it as a tool to help position the different services from Azure and AWS. Also note that not all features or services may be available in all regions; visit the Azure and AWS sites to see current availability. As with any comparison, they are often dated the day they are posted, hence this is a work in progress. If you are looking for another Microsoft-created why Azure vs. AWS, then check it out here. If you are looking for an AWS vs. Azure comparison, do a simple Google (or Bing) search and watch all the various items appear, some sponsored, some not so sponsored among others. What's In the Service Map The following AWS and Azure services are mapped: Marketplace (e.g. where you select service offerings) Compute (Virtual Machines instances, Containers, Virtual Private Servers, Serverless Microservices and Management) Storage (Primary, Secondary, Archive, Premium SSD and HDD, Block, File, Object/Blobs, Tables, Queues, Import/Export, Bulk transfer, Backup, Data Protection, Disaster Recovery, Gateways) Network & Content Delivery (Virtual networking, virtual private networks and virtual private cloud, domain name services (DNS), content delivery network (CDN), load balancing, direct connect, edge, alerts) Database (Relational, SQL and NoSQL document and key value, caching, database migration) Analytics and Big Data (data warehouse, data lake, data processing, real-time and batch, data orchestration, data platforms, analytics) Intelligence and IoT (IoT hub and gateways, speech recognition, visualization, search, machine learning, AI) Management and Monitoring (management, monitoring, advisor, DevOps) Mobile Services (management, monitoring, administration) Security, Identity and Access (Security, directory 
services, compliance, authorization, authentication, encryption, firewall) Developer Tools (workflow, messaging, email, API management, media transcoding, development tools, testing, DevOps) Enterprise Integration (application integration, content management) Download a PDF version of the service map from Microsoft here. Where To Learn More Learn more about related technology, trends, tools, techniques, and tips with the following links. What's a data infrastructure? (Via NetworkWorld) Introduction to Azure Files (Microsoft Docs) Public preview of Virtual Network service endpoints for Azure storage and SQL database Azure debuts Availability Zones for high availability and resiliency (Azure) Azure and AWS comparison (Docs Microsoft) Cloud Storage Considerations (via WServerNews) Microsoft Windows Server, Azure, Nano Life cycle Updates Overview Review of Microsoft ReFS (Reliable File System) and resource links Software Defined Data Infrastructure Essentials (CRC Press) book companion page Amazon Web Service AWS September 2017 Updates AWS S3 Storage Gateway Revisited (Part I) Cloud conversations: AWS EBS, Glacier and S3 overview (Part II S3) Microsoft September 2017 Updates Wha[...]
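In code terms, a service map is essentially a lookup table between providers. The following is an illustrative (and deliberately tiny, non-exhaustive) subset expressed as a Python dictionary; the pairings follow commonly cited Azure/AWS equivalences, not the full Microsoft map:

```python
# Hypothetical mini service map: AWS service -> rough Azure counterpart.
aws_to_azure = {
    "EC2": "Virtual Machines",
    "S3": "Blob Storage",
    "Lambda": "Azure Functions",
    "DynamoDB": "Cosmos DB",
    "RDS": "SQL Database",
    "CloudWatch": "Azure Monitor",
    "Route 53": "Azure DNS",
}

for aws_svc, azure_svc in aws_to_azure.items():
    print(f"{aws_svc:12} ~ {azure_svc}")
```

Keep in mind that these mappings are approximations: feature sets, pricing models, and regional availability differ even between "equivalent" services.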



September 2017 Server StorageIO Data Infrastructure Update Newsletter

Wed, 11 Oct 2017 00:11:22 -0600

Server StorageIO September 2017 Data Infrastructure Update Newsletter [...]



Microsoft Azure September 2017 Software Defined Data Infrastructure Updates

Wed, 11 Oct 2017 00:11:22 -0600

Microsoft September 2017 Software Defined Data Infrastructure Updates September was a busy month for data infrastructure topics as well as for Microsoft in terms of new and enhanced technologies. Wrapping up September was Microsoft Ignite where Azure, Azure Stack, Windows, O365, AI, IoT, and development tools announcements occurred, along with others from earlier in the month. As part of the September announcements, Microsoft released a new version of Windows Server (e.g. 1709) that has a focus on enhanced container support. Note that if you have deployed Storage Spaces Direct (S2D) and are looking to upgrade to 1709, do your homework as there are some caveats that will cause you to wait for the next release. Note that there had been new storage related enhancements slated for the September update; however those were announced at Ignite as being pushed to the next semi-annual release. Learn more here and also here. Azure Files and NFS Microsoft made several Azure file storage related announcements and public previews during September including native NFS based file sharing as a companion to existing Azure Files, along with a public preview of the new Azure File Sync service. Native NFS based file sharing (public preview announced, service is slated to be available in 2018) is a software defined storage deployment of NetApp ONTAP running on top of Azure data infrastructure including virtual machines, leveraging Azure underlying storage. Note that the new native NFS is in addition to the earlier native Azure Files accessed via HTTP REST and SMB3, enabling sharing of files inside the Azure public cloud, as well as accessible externally from Windows based and Linux platforms including on premises. Learn more about Azure Storage and Azure Files here. Azure File Sync (AFS) Azure File Sync (AFS) has now entered public preview. 
While users of Windows-based systems have been able to access and share Azure Files in the past, AFS is something different. I have used AFS for some time now through several private preview iterations, having seen how it has evolved, along with how Microsoft listens and incorporates feedback into the solution. Let's take a look at what AFS is, what it does, how it works, and where and when to use it among other considerations. With AFS, different and independent systems can now synchronize file shares through Azure. Currently in the AFS preview, Windows Server 2012 and 2016 are supported including bare metal, virtual, and cloud based. For example, I have had bare metal, virtual (VMware), and cloud (Azure and AWS) systems participating in file sync activities using AFS. Not to be confused with other storage related AFS acronyms including the Andrew File System among others, the new Microsoft Azure File Sync service enables files to be synchronized across different servers via Azure. This is different than the previously available Azure File Share service that enables files stored in Azure cloud storage to be accessed via Windows and Linux systems within Azure, as well as natively by Windows platforms outside of Azure. Likewise this is different from the recently announced Microsoft Azure native NFS file sharing service in partnership with NetApp (e.g. [...]



Dell EMC VMware September 2017 Software Defined Data Infrastructure Updates

Wed, 11 Oct 2017 00:11:22 -0600

Dell EMC VMware September 2017 Software Defined Data Infrastructure Updates

September was a busy month, including VMworld in Las Vegas, which featured many Dell EMC and VMware (among other) software defined data infrastructure updates and announcements. A summary of September VMware (and partner) related announcements includes:

• Workspace, security and endpoint solutions
• Pivotal Container Service (PKS) with Google for Kubernetes serverless container management
• DXC partnership for hybrid cloud management
• Security enablement via AppDefense
• Data infrastructure platform enhancements (integrated OpenStack, vRealize management tools, vSAN)
• Multi-cloud and hybrid cloud support along with VMware on AWS
• Dell EMC data protection for VMware and AWS environments

VMware and AWS via Amazon Web Services

VMware and AWS

Some of you might recall VMware's earlier attempt at public cloud with the vCloud Air service (see the Server StorageIO lab test drive here), which has since been deprecated (e.g. retired). This new approach by VMware leverages the large global presence of AWS, enabling customers to set up public or hybrid vSphere, vSAN and NSX based clouds, as well as software defined data centers (SDDC) and software defined data infrastructures (SDDI). VMware Cloud on AWS runs on dedicated, single-tenant hosts (unlike multi-tenant Elastic Cloud Compute (EC2) instances or VMs) and supports from 4 to 16 underlying hosts per cluster. Unlike EC2 virtual machine instances, VMware Cloud on AWS is delivered on elastic bare metal (e.g. dedicated private servers, aka DPS). Note that while AWS EC2 is more commonly known, AWS also has other options for server compute, including Lambda serverless micro services containers as well as Lightsail virtual private servers (VPS).
Besides servers with storage optimized I/O featuring low latency NVMe accessed SSDs and applicable underlying server I/O networking, VMware Cloud on AWS runs the VMware software stack directly on the underlying host servers (e.g. there is no virtualization nesting taking place). This means robust performance should be expected, similar to your on-premises VMware environment. VM workloads can move between your onsite VMware systems and VMware Cloud on AWS using various tools. VMware Cloud on AWS is delivered and managed by VMware, including pricing. Learn more about VMware Cloud on AWS here, here (VMware PDF) and here (VMware Hands On Lab, aka HOL). Read more about AWS September news and related updates here in this StorageIOblog post.

VMware and Pivotal PKS via VMware.com

Pivotal Container Service (PKS) and Google Kubernetes Partnership

During VMworld, VMware, Pivotal and Google announced a partnership for enabling Kubernetes container management called PKS (Pivotal Container Service). Kubernetes is evolving as a popular open source container microservice management and orchestration platform that has roots within Google. What this means is that what is good for Google and others for managing containers is now good for VMware and Pivotal. In related news, VMware has become a platinum sponsor of the Cloud Native Computing Foundation (CNCF). If you are not familiar w[...]



Amazon Web Service AWS September 2017 Software Defined Data Infrastructure Updates

Wed, 11 Oct 2017 00:11:22 -0600

Amazon Web Service AWS September 2017 Software Defined Data Infrastructure Updates

September was a busy month pertaining to software defined data infrastructure, including cloud and related AWS announcements. One of the announcements included VMware partnering to deliver vSphere, vSAN and NSX data infrastructure components for creating software defined data centers (SDDC), also known as multi cloud and hybrid cloud, leveraging AWS elastic bare metal servers (read more here in a companion post). Unlike traditional partner software defined solutions that relied on AWS Elastic Cloud Compute (EC2) instances, VMware is being deployed using private bare metal AWS elastic servers. What this means is that the VMware vSphere (e.g. ESXi) hypervisor, vCenter, software defined storage (vSAN), software defined network (NSX) and associated vRealize tools are deployed on AWS data infrastructure that can be used for deploying hybrid software defined data centers (e.g. connecting to your existing VMware environment). Learn more about VMware on AWS here or click on the following image.

Additional AWS Updates

Amazon Web Services (AWS) updates include, coinciding with VMworld, the initial availability of VMware on AWS (using dedicated servers, e.g. think along the lines of Lightsail, not EC2 instances). AWS continues its expansion into database and table services with Relational Database Service (RDS), including various engines (Amazon Aurora, MariaDB, MySQL, Oracle, PostgreSQL and SQL Server) along with the Database Migration Service (DMS). Note that these RDS offerings are in addition to what you can install and run yourself on Elastic Cloud Compute (EC2) virtual machine instances, Lambda serverless containers, or Lightsail Virtual Private Servers (VPS). AWS has published a guide to database testing on Amazon RDS for Oracle, plotting latency and IOPS for OLTP workloads, here using SLOB.
If you are not familiar with SLOB (Silly Little Oracle Benchmark), here is a podcast with its creator Kevin Closson discussing database performance and related topics. Learn more about SLOB and its step by step installation for AWS RDS Oracle here, and for those who are concerned or think that you cannot run workloads to evaluate Oracle platforms, have a look at this here. EC2 enhancements include charging by the second (previously by the hour) for some EC2 instances (see details here, including what is or is not currently available), which is a growing trend among cloud vendors, aligning with how serverless containers have been billed. New large memory EC2 instances that, for example, support up to 3,904GB of DDR4 RAM have been added by AWS. Other EC2 enhancements include updated network performance for some instances, an OpenCL development environment to leverage AWS F1 FPGA enabled instances, along with new Elastic GPU enabled instances. Other server and network enhancements include the announced Network Load Balancer for Elastic Load Balancing, as well as Application Load Balancer support for load balancing to IP addresses as targets for AWS and on premises (e.g. hybrid) resources. Other updates and announcements include data prot[...]
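The per-second versus per-hour billing change is easy to quantify. The following Python sketch uses a hypothetical $0.10/hour rate (not an actual AWS price) and assumes the 60 second minimum charge AWS documents for per-second billing:

```python
# Hedged sketch: hypothetical on-demand rate, not an actual AWS price.
HOURLY_RATE = 0.10  # USD per instance-hour (example figure)

def cost_per_hour_billing(runtime_seconds: int) -> float:
    """Per-hour billing: every started hour is charged in full."""
    hours_billed = -(-runtime_seconds // 3600)  # ceiling division
    return hours_billed * HOURLY_RATE

def cost_per_second_billing(runtime_seconds: int, minimum_seconds: int = 60) -> float:
    """Per-second billing with a minimum charge (assumed 60 seconds)."""
    return max(runtime_seconds, minimum_seconds) * HOURLY_RATE / 3600

# A 5 minute (300 second) batch job is billed as a full hour under per-hour
# billing, but only for the 300 seconds actually used under per-second billing.
```

At the assumed rate, that 300 second job costs $0.10 when billed by the hour versus well under a penny when billed by the second, which is why the change matters most for short-lived and bursty workloads.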



Getting Caught Up What Happened To September 2017

Thu, 28 Sep 2017 23:23:23 -0600

Getting Caught Up, What Happened In September?

Seems like just yesterday it was the end of August with the start of VMworld in Las Vegas; now it's the end of September and Microsoft Ignite in Orlando is wrapping up. Microsoft made several announcements this week at Ignite, including Azure cloud related, AI, IoT, Windows platforms and O365 among others. More about Microsoft Azure, Azure Stack, Windows Server, Hyper-V and related data infrastructure topics in future posts. Like many of you, I have had a busy September, so here is a recap of some of what I have been doing for the past month (among other things).

VMworld Las Vegas

During VMworld US, VMware announced enhanced workspace, security and endpoint solutions; Pivotal Container Service (PKS) with Google for Kubernetes serverless container management; a DXC partnership for hybrid cloud management; security enablement via its AppDefense solution; and data infrastructure platform enhancements including integrated OpenStack, vRealize management tools and vSAN, among others. VMware also made announcements including expanded multi-cloud and hybrid cloud support along with VMware on AWS, as well as Dell EMC data protection for VMware and AWS environments.

Software Defined Data Infrastructure Essentials (CRC Press) at VMworld bookstore

In other VMworld activity, my new book Software Defined Data Infrastructure Essentials (CRC Press) made its public debut in the VMworld book store, where I did a book signing event. You can get your copy of Software Defined Data Infrastructure Essentials, which includes Software Defined Data Centers (SDDC) along with hybrid, multi-cloud, serverless, converged and related topics, at Amazon among other venues. Learn more here.

Software Defined Everything (x)

In early September I was invited to present at the Wipro Software Defined Everything (x) event in New York City.
This event follows the inaugural SDx Summit event in London, England, that Wipro invited me to present at this past January. At the New York City event my presentation was Planning and Enabling Your Journey to SDx, which bridged the higher level, big picture industry trends to the applied, feet on the ground topics. Attendees of the event included customers, prospects, partners and various analyst firms, along with Wipro personnel. During a panel discussion at the Wipro event, a question was asked about the definition of software defined. After the usual vendor and industry responses, mine was simple: put the emphasis on Define as opposed to software, with a focus on what the resulting outcome is. In other words, how and what are you defining (e.g. the x), which could be storage, server, data center, data infrastructure or network among others, to produce a particular result, outcome, service or capability. While the emphasis is on defined, that can also mean curate, compose, craft, program or whatever you prefer to create an outcome.

Role of Storage in a Software Defined Data Infrastructure

At the Storage Networking Industry Association (SNIA) Storage Developer Conference (SDC) in Santa Clara I did a talk about the role of storage in Software Defined Data Infrastructures. T[...]



August 2017 Server StorageIO Data Infrastructure Update Newsletter

Sun, 27 Aug 2017 17:16:15 -0600

Server StorageIO August 2017 Data Infrastructure Update Newsletter [...]



Hot Popular New Trending Data Infrastructure Vendors To Watch

Sat, 26 Aug 2017 23:23:23 -0600

Hot Popular New Trending Data Infrastructure Vendors To Watch

A common question I get asked is who are the hot, popular, new or trending data infrastructure vendors to watch. Keep in mind that there is a difference between industry adoption and customer deployment. The former is what the industry (e.g. vendors, resellers, integrators, investors, consultants, analysts, press, media, bloggers or other influencers) likes, wants and needs to talk about. Then there is customer adoption and deployment, which is what is being bought, installed and used.

Some Popular Trending Vendors To Watch

The following is far from an exhaustive list; however, here are some that come to mind that I'm watching.

• Apcera – Enterprise class containers and management tools
• AWS – Rolls out new services like a startup, with the size and momentum of a legacy player
• Blue Medora – Data infrastructure insight, software defined management
• Broadcom – Avago/LSI, legacy Broadcom, Emulex and the Brocade acquisition make an interesting portfolio
• Chelsio – Server, storage and data infrastructure I/O technologies
• Commvault – Data protection and backup solutions
• Compuverde – Software defined storage
• Data Direct Networks (DDN) – Scale out and high performance storage
• Datadog – Software defined management, data infrastructure insight, analytics, reporting
• Datrium – Converged software defined data infrastructure solutions
• Dell EMC Code – REX-Ray container persistent storage management
• Docker – Container and management tools
• E8 Storage – NVMe based storage solutions
• Elastifile – Scale out software defined storage and file system
• Enmotus – MicroTiering that works with Windows, Linux and various cloud platforms
• Everspin – Storage class memories and NVDIMM
• Excelero – NVMe based storage
• Hedvig – Scale out software defined storage
• Huawei – While not common in the US, in Europe and elsewhere they are gaining momentum
• Intel – Watch what they do with Optane and storage class memories
• Kubernetes – Container software defined management
• Liqid – Stealth Colorado startup focusing on PCIe fabrics and composable infrastructure
• Maxta – Hyper converged infrastructure (HCI) and software defined data infrastructure vendor
• Mellanox – While not a startup, keep an eye on what they are doing with their adapters
• Micron – Watch what they do with 3D XPoint storage class memory and SSDs
• Microsoft – Not a startup; however, keep an eye on Azure, Azure Stack, and Windows Server with S2D, ReFS, tiering, CI/HCI as well as Linux services on Windows
• Minio – Software defined storage solutions
• NetApp – While FAS/ONTAP and SolidFire get the headlines, the E-Series generates revenue; keep an eye on StorageGRID and AltaVault
• NeuVector – Container management and security
• NooBaa – Software defined storage and more
• NVIDIA – No longer just another graphics processing unit based company
• Pivot3 – An[...]



Travel Fun Crossword Puzzle For VMworld 2017 Las Vegas

Thu, 24 Aug 2017 18:18:18 -0600

Travel Fun Crossword Puzzle For VMworld 2017 Las Vegas

Some of you may be traveling to VMworld 2017 in Las Vegas next week to sharpen, expand, refresh or share your VMware and data infrastructure tradecraft (skills, experiences, expertise, knowledge). Here is something fun to sharpen your VMware skills while traveling. Most of these should be pretty easy, meaning that you do not have to be a Unicorn full of vCertifications and vCredentials, or a 9, 8, 7, 6, 5, 4, 3, 2 or 1st time vExpert or top 100 vBlogger. However, if you need the answers, they are below. Note that you can also click here (or on the image) to get a larger PDF version that also has the answers. For those of you who will be in Las Vegas at VMworld next week, stop by the VMworld Book Store at 1PM on Tuesday (the 29th), where I will be doing a book signing event for my new book Software Defined Data Infrastructure Essentials (CRC Press); stop by and say hello. Note there are also Kindle and other electronic versions of my new SDDI Essentials book on Amazon.com and other venues if you need something to read during your upcoming travels.

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links.

What's a data infrastructure? (Via NetworkWorld)
Software Defined Data Infrastructure Essentials (CRC Press) book companion page (includes various images, sample figures and added content)
Object and Cloud Storage Center (www.objectstoragecenter.com)
www.thessdplace.com – NVM, flash, SSD, SCM and related topics
www.thenvmeplace.com – NVM Express (NVMe) related topics
Answers to the crossword puzzle (PDF)
VMworld 2017 site

Various IT and Cloud Infrastructure Layers including Data Infrastructures

What This All Means

Have a safe and fun trip on your way to Las Vegas for next week's VMworld, enjoy the crossword puzzle, and if you need the answers, they are located here (PDF). See you at VMworld 2017 in Las Vegas. Ok, nuff said, for now.
Gs Greg Schulz - Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2018 Server StorageIO(R) and UnlimitedIO All Rights Reserved [...]



Announcing Software Defined Data Infrastructure Essentials Book by Greg Schulz

Wed, 23 Aug 2017 19:18:17 -0600

New SDDI Essentials Book by Greg Schulz of Server StorageIO

SDDC, Cloud, Converged, and Virtual Fundamental Server Storage I/O Tradecraft

Over the past several months I have been posting, commenting, presenting and discussing more about data infrastructures and my new book (my 4th solo project), officially announced today: Software Defined Data Infrastructure Essentials (CRC Press). Software Defined Data Infrastructure (SDDI) Essentials is now generally available at various global venues in hardcopy print as well as various electronic versions, including via Amazon and CRC Press among others. For those attending VMworld 2017 in Las Vegas, I will be doing a book signing, meet and greet at 1PM Tuesday August 29 in the VMworld book store, as well as presenting at various other fall industry events.

Software Defined Data Infrastructure (SDDI) Announcement (Via Businesswire)

Stillwater, Minnesota – August 23, 2017 – Server StorageIO, a leading independent IT industry advisory and consultancy firm, in conjunction with publisher CRC Press, a Taylor and Francis imprint, announced the release and general availability of “Software-Defined Data Infrastructure Essentials,” a new book by Greg Schulz, noted author and Server StorageIO founder. The Software Defined Data Infrastructure Essentials book covers physical, cloud, converged (and hyper-converged), container, and virtual server storage I/O networking technologies, revealing trends, tools, techniques, and tradecraft skills.
Various IT and Cloud Infrastructure Layers including Data Infrastructures

From cloud web scale to enterprise and small environments, IoT to database, software-defined data center (SDDC) to converged and container servers, flash solid state devices (SSD) to storage and I/O networking, the book helps develop or refine hardware, software, services and management experiences, providing real-world examples for those involved with, or looking to expand, their data infrastructure education, knowledge and tradecraft skills.

Software Defined Data Infrastructure Essentials book topics include:

• Cloud, converged, container, and virtual server storage I/O networking
• Data protection (archive, availability, backup, BC/DR, snapshot, security)
• Block, file, object, structured, unstructured and data value
• Analytics, monitoring, reporting, and management metrics
• Industry trends, tools, techniques, decision making
• Local and remote server, storage and network I/O troubleshooting
• Performance, availability, capacity and economics (PACE)

Where To Purchase Your Copy

Order via Amazon.com and CRC Press along with Google Books among other global venues.

What People Are Saying About Software Defined Data Infrastructure Essentials

“From CIOs to operations, sales to engineering, this book is a comprehensive reference, a must-read for IT infrastructure professionals, beginners to seasoned experts,” said Tom Becchetti, advisory systems engineer. "We had a front row seat watch[...]



Chelsio Storage over IP and other Networks Enable Data Infrastructures

Fri, 18 Aug 2017 18:17:16 -0600

Chelsio and Storage over IP (SoIP) continue to enable data infrastructures from legacy to software defined virtual, container, cloud as well as converged environments. This past week I had a chance to visit with Chelsio to discuss data infrastructures, server storage I/O networking and other related topics. More on Chelsio later in this post; however, for now let's take a quick step back and refresh what SoIP (Storage over IP), along with storage over Ethernet (among other networks), is.

Various IT and Cloud Infrastructure Layers including Data Infrastructures

Server Storage over IP Revisited

There are many variations of SoIP, from network attached storage (NAS) file based processing including NFS and Samba/SMB (aka Windows file sharing) among others, to various block protocols such as SCSI over IP (e.g. iSCSI), along with object access via HTTP/HTTPS, not to mention the buzzword bingo list of RoCE, iSER, iWARP, RDMA, DPDK, FTP, FCoE, iFCP and SMB3 Direct, to name a few.

Who is Chelsio

For those who are not aware or need a refresher, Chelsio is involved with enabling server storage I/O by creating ASICs (Application Specific Integrated Circuits) that perform various functions, offloading them from the host server processor. What this means for some is a throwback to the TCP Offload Engine (TOE) era of the early 2000s, where various processing to handle regular traffic, along with iSCSI and other storage over Ethernet and IP, could be accelerated.

Chelsio ecosystem across different data infrastructure focus areas and application workloads

As seen in the image above, there is certainly a server and storage I/O network play with Chelsio, along with traffic management, packet inspection, security (encryption, SSL and other offload), traditional, commercial, web, high performance compute (HPC), along with high profit or productivity compute (the other HPC).
Chelsio also enables data infrastructures that are part of physical bare metal (BM), software defined virtual, container, cloud and serverless environments, among others. The above image shows how Chelsio enables initiators on server and storage appliances as well as targets via various storage over IP (or Ethernet) protocols. Chelsio also plays in several different sectors, from *NIX to Windows, cloud to containers, and various processor architectures and hypervisors. Besides diverse server storage I/O enabling capabilities across various data infrastructure environments, what caught my eye with Chelsio is how far they, and storage over IP, have progressed over the past decade (or more). Granted, there are faster underlying networks today; however, the offload and specialized chip sets (e.g. ASICs) have also progressed, as seen in the above and following series of images via Chelsio. The above shows TCP and UDP acceleration; the following shows Microsoft SMB 3.1.1 performance, something important for Storage Spaces Direct (S2D) and Windows-based Converged Infrastructure (CI) along with Hyper Converged Infrastructure (HCI) deployments. Something else that caught my eye was iSCSI performance[...]
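To ground the storage over IP idea discussed above, here is a minimal, self-contained Python sketch of the simplest SoIP variant: object access over HTTP using PUT and GET. This is a toy in-memory store for illustration only, not any vendor's implementation:

```python
# Minimal sketch of object storage access over IP: PUT stores bytes at a path,
# GET retrieves them, all over plain HTTP using only the standard library.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

OBJECTS = {}  # in-memory "object store": request path -> bytes

class ObjectHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        length = int(self.headers.get("Content-Length", 0))
        OBJECTS[self.path] = self.rfile.read(length)
        self.send_response(201)  # Created
        self.end_headers()

    def do_GET(self):
        body = OBJECTS.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def start_server() -> HTTPServer:
    """Start the toy object server on a free local port, in a background thread."""
    server = HTTPServer(("127.0.0.1", 0), ObjectHandler)  # port 0 = pick free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Real object stores layer authentication, buckets, metadata and erasure coding on top, but the transport idea is the same: storage semantics carried over ordinary IP networking, which is exactly the traffic that offload ASICs like Chelsio's accelerate.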



Microsoft Azure Cloud Software Defined Data Infrastructure Reference Architecture Resources

Thu, 17 Aug 2017 23:23:23 -0600

Microsoft Azure Cloud Software Defined Data Infrastructure Reference Architecture Resources

Need to learn more about Microsoft Azure cloud software defined data infrastructure topics, including reference architectures among other resources, for various application workloads? Microsoft Azure has an architecture and resources page (here) that includes various application workload reference tools.

Azure Reference Architectures via Microsoft Azure

Examples of Azure reference architectures for various applications and workloads include, among others:

SharePoint high availability (HA)
Windows virtual machines (standalone, availability and other options)
Linux virtual machines (standalone, availability and other options)
Managed web applications, including multi region
SAP deployments

For example, need to know how to configure a high availability (HA) SharePoint deployment with Azure? Then check out the reference architecture shown below.

SharePoint HA via Microsoft Azure

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links.

Cloud storage decision making and more here
Microsoft Windows Server, Azure, Nano life cycle updates
Gaining Server Storage I/O Insight into Microsoft Windows Server 2016
Overview Review of Microsoft ReFS (Reliable File System) and resource links
Dell EMC Announce Azure Stack Hybrid Cloud Solution
Azure Stack Technical Preview 3 (TP3) Overview Preview Review
Microsoft Azure Architecture and Reference Resources
NVMe related and flash SSD along with cloud, bulk, object storage topics
Software Defined Data Infrastructure Essentials (CRC Press)

Various IT and Cloud Infrastructure Layers including Data Infrastructures

What This All Means

Data infrastructures exist to protect, preserve, secure and serve information along with the applications and data they depend on.
Software defined data infrastructures span legacy, virtual, container, cloud and other environments to support various application workloads. Check out the Microsoft Azure cloud reference architecture and resources mentioned above, as well as the Azure free trial and getting started site here. Ok, nuff said, for now.



Like Data They Protect For Now Quantum Revenues Continue To Grow

Mon, 14 Aug 2017 14:13:12 -0600

For Now Quantum Revenues Continue To Grow

The other day, following their formal announcement, I received a summary update from Quantum pertaining to their recent Q1 results (shown below).

Various IT and Cloud Infrastructure Layers including Data Infrastructures

Quantum's Revenues Continue To Grow Like Data

One of the certainties in life is change; another is continued growth in data that gets transformed into information via IT and other applications. Data infrastructures' fundamental role is to enable an environment for applications and data to be transformed into information and delivered as services. In other words, data infrastructures exist to protect, preserve, secure and serve information along with the applications and data they depend on. Quantum's role is to provide solutions and technologies for enabling legacy and cloud or other software defined data infrastructures to protect, preserve, secure and serve data. What caught my eye in Quantum's announcements was that, while not the earth shattering growth numbers normally associated with a hot startup, for a legacy data infrastructure and storage vendor, Quantum's numbers are hanging in there. At a time when some legacy vendors as well as startups struggle with increased competition from others, including cloud, Quantum appears, at least for now, to be hanging in there with some gains. The other thing that caught my eye is that most of the growth, not surprisingly, is in non tape related solutions, particularly around their bulk scale out StorNext storage solutions, though there is some growth in tape.
Here is an excerpt of what Quantum sent out:

Highlights for the quarter (all comparisons are to the same period a year ago):

• Grew total revenue and generated profit for 5th consecutive quarter
• Total revenue was up slightly to $117M, with 3% increase in branded revenue
• Generated operating profit of $1M with earnings per share of 4 cents, up 2 cents
• Grew scale-out tiered storage revenue 10% to $34M, with strong growth in video surveillance and technical workflows
  o Key surveillance wins included deals with an Asian government for surveillance at a presidential palace and other government facilities, with a major U.S. port and with four new police department customers
  o Established several new surveillance partnerships – one of top three resellers/integrators in China (Uniview) and two major U.S. integrators (Protection 1 and Kratos)
  o Won two surveillance awards for StorNext – Security Industry Association’s New Product Showcase award and Security Today magazine’s Platinum Govies Government Security award
  o Key technical workflow wins included deals at an international defense and aerospace company to expand a StorNext archive environment, a leading biotechnology firm for a 1 PB genomic sequencing archive, a top automaker involving autonomous driving research data and a U.S. technology institute involving high performance computing [...]



Server StorageIO Industry Trends Perspectives Report WekaIO Matrix

Sat, 12 Aug 2017 15:14:13 -0600

Server StorageIO Industry Trends Perspectives Report WekaIO Matrix

WekaIO is a scale out software defined storage startup vendor whose solution is Matrix.

(image)

This Server StorageIO Industry Trends Perspective report looks at common issues, trends, and how to address different application server storage I/O challenges. In this report, we look at WekaIO Matrix, an elastic, flexible, highly scalable easy to use (and manage) software defined (e.g. software based) storage solution. WekaIO Matrix enables flexible elastic scaling with stability and without compromise.

Matrix is a new scale out software defined storage solution that:
  • Installs on bare metal, virtual or cloud servers
  • Provides POSIX, NFS, SMB, and HDFS storage access
  • Adapts performance for little and big data
  • Tiers across flash SSD and cloud object storage
  • Delivers distributed resilience without compromise
  • Removes the complexity of traditional storage

Read more in this (free, no registration required) Server StorageIO Industry Trends Perspective (ITP) WekaIO Matrix Report compliments of WekaIO.

Ok, nuff said, for now.
Gs






Like IT Data Centers Do You Take Trade Show Exhibit Infrastructure For Granted?

Fri, 11 Aug 2017 16:15:14 -0600

Do You Take Trade Show Exhibit Infrastructure For Granted?

Think about this for a moment: do you assume that Information Technology (IT) and cloud based data centers, along with their associated data infrastructures supporting various applications, will be accessible when needed? Likewise, when you go to a trade show, conference, symposium, user group or another conclave, is it assumed that the trade show, exposition (expo), exhibits, booths, stands or demo areas will be ready, waiting and accessible?

Fire Disrupts Flash Memory Summit Conference Exhibits

This past week at the Flash Memory Summit (FMS) conference trade show event in Santa Clara, California, what normally would be taken for granted (e.g. the expo hall and exhibits) was disrupted. The disruption (more here and here) was caused by an early morning fire in one of the exhibitors' booths (stands) in the expo hall (view some photos here via Tom's Hardware). Fortunately, nobody was hurt, at least physically, and the damage appears to have been isolated. However, while the keynotes, panels and other presentations did take place (the show must go on), the popular exhibit expo hall did not open. Granted, for some people who only attend conferences or seminar events for the presentation content, lack of the exhibition hall simply meant no free giveaways. On the other hand, for those who attend events like FMS mainly for the exhibition hall experience, the show did not go on, perhaps resulting in a trip in vain (e.g. how you might be able to recoup some travel costs in some scenarios) for some people. For example, those who were attending to meet with a particular vendor, see a product or technology, conduct some business or other meetings, do an interview, video or podcast, take some photos, or simply get some free stuff were disrupted.
Likewise, those behind the scenes, from conference organizers and event staff, not to mention the vendor sponsors who put resources (time, money, people, and equipment) into an exhibit, were disrupted. Vendors were still able to issue their press releases and conduct their presentations, keynotes and panel discussions; however, what about the lack of the expo?

Do We Take IT Data Centers, Data Infrastructures and Event Infrastructures For Granted?

This raises the question of whether trade show exhibits still have value, or whether an event can function without one. I am not sure, as some events can and do stand on their merit with presentation content being the primary focus; for others the expo is the draw; and many are hybrids with a mix of both. A question and point of this piece is: how many people take conferences in general, and exhibits along with their associated infrastructure, for granted? How many know or understand the amount of time, money, people resources and various tradecraft skills across different disciplines that go into event planning, staging, coordination and execution, so they occ[...]



NVMe Won't Replace Flash By Itself; They Complement Each Other

Wed, 9 Aug 2017 17:16:15 -0600

NVMe Won't Replace Flash By Itself; They Complement Each Other. Various Solid State Devices (SSD) including NVMe, SAS, SATA, USB, M.2. There has been some recent industry marketing buzz generated by a startup seeking attention by claiming, via a study the startup itself sponsored, that Non-Volatile Memory (NVM) Express (NVMe) will replace flash storage. Granted, many IT customers as well as vendors are still confused by NVMe, thinking it is a storage medium as opposed to an interface used for accessing fast storage devices such as NAND flash among other solid state devices (SSDs). Part of that confusion can be tied to the fact that common SSD-based devices rely on NVM that is persistent memory, retaining data when powered off (unlike the main memory in your computer).

Instead of saying NVMe will mean the demise of flash, what should or could be said (however some might be scared to say it) is that other interfaces and protocols such as SAS (Serial Attached SCSI), AHCI/SATA, mSATA, Fibre Channel SCSI Protocol (aka FCP, aka simply Fibre Channel [FC]), iSCSI and others are what can be replaced by NVMe. NVMe is simply the path or roadway, along with the traffic rules, for getting from point A (such as a server) to point B (some storage device or medium, e.g. a flash SSD). The storage medium is where data is stored, such as magnetic for Hard Disk Drives (HDD) or tape, NAND flash, 3D XPoint or Optane among others.

NVMe and NVM including flash are better together. The simple, quick, get-to-the-point version is that NVMe (Non-Volatile Memory Express) is an interface protocol (like SAS, SATA or iSCSI among others) used for communicating with various non-volatile memories (NVM) and solid state devices (SSDs). NVMe is how data gets moved between a computer or other system and the NVM persistent memory such as NAND flash, 3D XPoint, spin-torque or other storage class memories (SCM).
In other words, the only thing NVMe will, should, might or could kill off would be the use of some other interface such as SAS, SATA/AHCI, Fibre Channel or iSCSI, along with proprietary drivers or protocols. On the other hand, given the extensibility of NVMe and how it can be used in different configurations, including as part of fabrics, it is an enabler for various NVMs, also known as persistent memories, SCMs and SSDs, including those based on NAND flash as well as emerging 3D XPoint (or Intel's Optane version) among others. Where To Learn More: Learn more about related technology, trends, tools, techniques, and tips with the following links. If the answer is NVMe, then what are (or were) the questions? HDD topics and what to use for bulk and content storage. NVMe related and flash SSD along with cloud, bulk, object storage topics. Software Defined Data Infrastructure Essentials (CRC Press), which provides more coverage of NVM and NVMe among other topics. www.thessdplace.com and www.thenvmeplace.com. Get in the NVMe SSD game (if you are [...]
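The key distinction in this piece, that NVMe is an access interface or protocol while flash is a storage medium, can be sketched in a few lines of Python. The term lists below are illustrative assumptions drawn only from the terms mentioned above, not an exhaustive taxonomy:

```python
# Sketch: classify storage-related terms as access interfaces/protocols
# versus storage media, per the distinction drawn in the article above.
# The term lists are illustrative assumptions, not an exhaustive taxonomy.

INTERFACES = {"nvme", "sas", "sata", "ahci", "msata", "iscsi", "fibre channel", "fcp"}
MEDIA = {"nand flash", "3d xpoint", "optane", "hdd", "tape"}

def classify(term: str) -> str:
    t = term.strip().lower()
    if t in INTERFACES:
        return "interface/protocol (how data is accessed)"
    if t in MEDIA:
        return "medium (where data is stored)"
    return "unknown"

print(classify("NVMe"))        # interface/protocol (how data is accessed)
print(classify("NAND flash"))  # medium (where data is stored)
```

Seen this way, "NVMe replaces flash" is a category mistake: an interface can replace another interface (SAS, SATA, iSCSI), not the medium it carries.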






Zombie Technology: Life after Death, Tape Is Still Alive

Mon, 24 Jul 2017 18:17:16 -0600

Zombie Technology: Life after Death, Tape Is Still Alive. A zombie technology is one declared dead yet has life after death, such as tape, which is still alive. Image via StorageIO.com (licensed for use from Shutterstock.com)

Tape's Evolving Role. Sure, we have heard for decades about the death of tape, and someday it will be dead and buried (I mean really dead), no longer used, existing only in museums. Granted, tape has been on the decline for some time, and even with many vendors exiting the marketplace, there remains continued development and demand within various data infrastructure environments, including software defined as well as legacy. Tape remains viable for some environments as part of an overall memory and data storage hierarchy, including as a portability (transportable) as well as bulk storage medium. Keep in mind that tape's role as a data storage medium also continues to change, as does its location.

The following table (via Software Defined Data Infrastructure Essentials (CRC Press), Chapter 10) shows examples of various data movements from source to destination. These movements include migration, replication, clones, mirroring, backup and copies, among others. The source device can be a block LUN, volume, partition, physical or virtual drive, HDD or SSD, as well as a file system, object, or blob container or bucket. Examples of the modes in Table 10.1 include D2D backup from local to local (or remote) disk (HDD or SSD) storage, or D2D2D copy from local to local storage, then to the remote.

Mode - Description
D2D - Data gets copied (moved, migrated, replicated, cloned, backed up) from source storage (HDD or SSD) to another device or disk (HDD or SSD)-based device.
D2C - Data gets copied from a source device to a cloud device.
D2T - Data gets copied from a source device to a tape device (drive or library).
D2D2D - Data gets copied from a source device to another device, and then to another device.
D2D2T - Data gets copied from a source device to another device, then to tape.
D2D2C - Data gets copied from a source device to another device, then to cloud.
Table 10.1: Data Movement Modes from Source to Destination

Note that movement from source to target can be a copy, rsync, backup, replicate, snapshot, clone or robocopy among many other actions. Also note that in the earlier examples there are occurrences of tape existing in clouds (e.g. its place and use changing). Tip: in the past, "disk" usually referred to HDD. Today, however, it can also mean SSD. Think of D2D as not being just HDD to HDD, as it can also be SSD to SSD, Flash to Flash (F2F), or S2S among many other variations if you prefer (or need). Image via Tapestorage.org. For those still interested in tape, check out the Active Archive Alliance recent posts (here), as wel[...]
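The D2...2X shorthand in Table 10.1 can be parsed mechanically. Below is a minimal, hypothetical Python sketch that expands a mode string such as D2D2C into its sequence of hops, using the letter meanings from the table (D = disk, C = cloud, T = tape); the function name and structure are illustrative, not from any real tool:

```python
# Sketch: expand a data movement mode string (e.g. "D2D2C") into its hops,
# using the letter meanings from Table 10.1 above: D=disk, C=cloud, T=tape.
# Naming and structure are illustrative assumptions, not a real API.

TIERS = {"D": "disk (HDD or SSD)", "C": "cloud", "T": "tape"}

def expand_mode(mode: str) -> list:
    hops = mode.upper().split("2")
    unknown = [h for h in hops if h not in TIERS]
    if unknown:
        raise ValueError(f"unknown tier letter(s): {unknown}")
    return [TIERS[h] for h in hops]

print(expand_mode("D2D2C"))
# ['disk (HDD or SSD)', 'disk (HDD or SSD)', 'cloud']
```

This also makes the tip about "disk" concrete: each D hop can be HDD or SSD, so D2D covers HDD-to-HDD, SSD-to-SSD and mixed variations alike.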



Who Will Be At Top Of Storage World Next Decade?

Sat, 22 Jul 2017 19:18:17 -0600

Data storage, regardless of whether it is hardware, legacy, new, emerging, a cloud service or one of various software defined storage (SDS) approaches, is a fundamental resource component of data infrastructures, along with compute servers, I/O networking, and management tools, techniques, processes and procedures. Fundamental Data Infrastructure resources. Data infrastructures include legacy along with software defined data infrastructures (SDDI) and software defined data centers (SDDC), cloud and other environments that support expanding workloads more efficiently as well as effectively (e.g. boosting productivity). Data Infrastructure and other IT Layers (stacks and altitude levels). Various data infrastructure resource components spanning server, storage, I/O networks and tools, along with hardware, software and services, get defined as well as composed into solutions or services, which may in turn be further aggregated into more extensive, higher-altitude offerings (e.g. further up the stack). Various IT and Data Infrastructure Stack Layers (Altitude Levels).

Focus on Data Storage Present and Future Predictions. Drew Robb (@Robbdrew) has a good piece over at Enterprise Storage Forum looking at the past, present and future of who will rule the data storage world, including several perspective and prediction comments from myself as well as others. Some of the perspectives and predictions by others are more generic, technology trend and buzzword bingo focused, which should not be a surprise: for example, the usual performance, cloud and object storage, DPDK, RDMA/RoCE, software-defined, NVM/flash/SSD, CI/HCI and NVMe among others. Here are some excerpts from Drew's piece along with my perspective and prediction comments on who may rule the data storage roost in a decade: Amazon Web Services (AWS) – AWS includes cloud and object storage in the form of S3.
However, there is more to storage than object and S3, with AWS also having Elastic File Services (EFS), Elastic Block Storage (EBS), database, message queue and on-instance storage, among others, for traditional, emerging and Internet of Things (IoT) storage. It is difficult to think of AWS not being a major player in a decade unless they totally screw up their execution in the future. Granted, some of their competitors might be working overtime putting pins and needles into voodoo dolls (perhaps bought via Amazon.com) while wishing for the demise of Amazon Web Services, just saying. Voodoo dolls and image via Amazon.com. Of course, Amazon and AWS could follow the likes of Sears (e.g. some may remember their catalog) and ignore the future, ending up on the where-are-they-now list. While talking about Amazon and AWS, one has to wonder where Walmart will end up in a decade, with or without a cloud of their own? Microsoft – Wi[...]



New family of Intel Xeon Scalable Processors enable software defined data infrastructures (SDDI) and SDDC

Tue, 11 Jul 2017 21:21:21 -0600

Today Intel announced a new family of Xeon Scalable Processors (aka Purley) that for some workloads Intel claims to be on average 1.65x faster than their predecessors. Note that your real improvement will vary based on workload, configuration, benchmark testing, type of processor, memory, and many other server storage I/O performance considerations. Image via Intel.com

In general, the new Intel Xeon Scalable Processors enable legacy and software defined data infrastructures (SDDI), along with software defined data centers (SDDC), cloud and other environments, to support expanding workloads more efficiently as well as effectively (e.g. boosting productivity). Some target application and environment workloads Intel is positioning these new processors for include, among others: Machine Learning (ML), Artificial Intelligence (AI), advanced analytics, deep learning and big data; networking, including software defined networking (SDN) and network function virtualization (NFV); cloud and virtualization, including Azure Stack, Docker and Kubernetes containers, Hyper-V, KVM, OpenStack and VMware vSphere among others; High Performance Compute (HPC) and High Productivity Compute (e.g. the other HPC); and storage, including legacy and emerging software defined storage software deployed as appliances, systems or serverless deployment modes.

Features of the new Intel Xeon Scalable Processors include: a new core microarchitecture with interconnects and on-die memory controllers; sockets (processors) scalable up to 28 cores; improved networking performance using Quick Assist and the Data Plane Development Kit (DPDK); and Intel Quick Assist Technology for CPU offload of compute intensive functions including I/O networking, security, AI, ML, big data, analytics and storage functions.
Functions that benefit from Quick Assist include cryptography, encryption, authentication, cipher operations, digital signatures, key exchange, lossless data compression and data footprint reduction, along with data at rest encryption (DARE). Other features include: Optane Non-Volatile Dual Inline Memory Modules (NVDIMM) for storage class memory (SCM), also referred to by some as Persistent Memory (PM), not to be confused with Physical Machine (PM); support for Advanced Vector Extensions 512 (AVX-512) for HPC and other workloads; optional Omni-Path fabrics in addition to 1/10Gb Ethernet among other I/O options; six memory channels supporting up to 6TB of RDIMM with multi-socket systems; from two to eight sockets per node (system); and system support for PCIe 3.x (some supporting x4-based M.2 interconnects). Note that exact speeds, feeds, slots and watts will vary by specific server model and vendor options. Also note that some server system solutions have two or mo[...]






GDPR (General Data Protection Regulation) Resources Are You Ready?

Tue, 20 Jun 2017 17:11:17 -0600

The new European General Data Protection Regulation (GDPR) goes into effect in a year, on May 25, 2018. Are you ready?

What Is GDPR? If your initial response is that you are not in Europe and do not need to be concerned about GDPR, you might want to step back and review that thought. While it is possible that some organizations may not be affected by GDPR in Europe directly, there might be indirect considerations. For example, GDPR, while focused on Europe, has ties to other initiatives in place or being planned for elsewhere in the world. Likewise, unlike earlier regulatory compliance that tended to focus on specific industries such as healthcare (HIPAA and HITECH) or financial (SARBOX and Dodd/Frank among others), these new regulations can be more far-reaching.

Where To Learn More: GDPR goes into effect May 25, 2018, Are You Ready? GDPR Compliance Planning for Microsoft Environments (webinar). Quest GDPR resources. Quest GDPR compliance resources. Microsoft and Azure Cloud GDPR resources. How Microsoft Azure Can Help Organizations Become Compliant with the EU GDPR (Microsoft PDF white paper). GDPR Questions? Azure has answers (Microsoft Azure). Microsoft and GDPR Trust Center resources (Microsoft). Get GDPR compliant with the Microsoft Cloud (Microsoft blogs). Earning your trust with contractual commitments to the General Data Protection Regulation (Microsoft blogs). Microsoft enterprise products and services and the GDPR (Microsoft). Where to Start with GDPR and Microsoft (Microsoft). Microsoft related GDPR FAQ (Microsoft). Do you have or know of relevant GDPR information and resources? Feel free to add them via comments or send us an email; however, please watch the spam and sales pitches as they will be moderated.

What This All Means: Now is the time to start planning and preparing for GDPR if you have not done so and need to, as well as becoming more generally aware of it and other initiatives.
One of the key takeaways is that while the word compliance is involved, there is much more to GDPR than just compliance as we have seen in the past. With GDPR and other initiatives, data protection becomes the focus, including privacy, protect, preserve, secure and serve, as well as manage, insight and awareness, along with associated reporting. Ok, nuff said (for now...). Cheers Gs Greg Schulz - Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press). All Comments, (C) and (TM) belong to their own[...]



Microsoft Windows Server, Azure, Nano Life Cycle Updates

Thu, 15 Jun 2017 18:17:16 -0600

Microsoft Windows Server, Azure, Nano and Life Cycle Updates. For those of you who have an interest in Microsoft Windows Server on-premises, on Azure, on Hyper-V, or in the Nano life cycle, here are some recently announced updates. Microsoft has announced updates to Windows Server Core and Nano along with semi-annual channel updates (read more here). The synopsis of this new update via Microsoft (read more here) is: in this new model, Windows Server releases are identified by the year and month of release. For example, in 2017, a release in the 9th month (September) would be identified as version 1709. Windows Server will release semi-annually in fall and spring. Another release in March 2018 would be version 1803. The support lifecycle for each release is 18 months.

Microsoft has announced that its lightweight variant of Windows Server 2016 (if you need a refresher on server requirements visit here), known as Nano, will now be focused on Windows-based containers as opposed to bare metal. As part of this change, Microsoft has reiterated that Server Core, the headless (aka non-desktop user interface) version of Windows Server 2016, will continue as the platform for bare metal along with other deployments where a GUI interface is not needed. Note that one of the original premises of Nano was that it could be leveraged as a replacement for Server Core. As part of this shift, Microsoft has also stated its intention to further streamline the already slimmed-down version of Windows Server known as Nano by reducing its size another 50%. Keep in mind that Nano is already a fraction of the footprint size of regular Windows Server (Core or Desktop UI). The footprint of Nano includes its capacity size on disk (HDD or SSD), as well as its memory requirements, speed of startup (boot), and number of components, which cuts the number of updates. By focusing Nano on container use (e.g. Windows containers), Microsoft is providing multiple microservices engines (e.g.
Linux and Windows) along with various management tools including Docker. Similar to providing multiple container engines (e.g. Linux and Windows), Microsoft is also supporting management from Windows along with Unix. Does This Confirm the Rumor FUD that Nano is Dead? IMHO the answer is no; the FUD rumors circulating that Nano is dead are false. Granted, Nano is being refocused by Microsoft for containers and will not be the lightweight headless Windows Server 2016 replacement for Server Core. Instead, the Microsoft focus is a two-path one, with continued enhancements to Server Core for headless full Windows Server 2016 deployment, while Nano gets further streamlined for containers. This means that Nano is no longer bare metal o[...]
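The semi-annual channel naming described above (version identifier from year and month of release, e.g. September 2017 becomes 1709, March 2018 becomes 1803) is a simple calculation. A minimal Python sketch, with a hypothetical helper name:

```python
# Sketch: derive a Windows Server semi-annual channel version identifier
# (YYMM) from a release year and month, per the naming scheme described above.
# The function name is illustrative, not a real API.

def channel_version(year: int, month: int) -> str:
    if not 1 <= month <= 12:
        raise ValueError("month must be 1-12")
    return f"{year % 100:02d}{month:02d}"

print(channel_version(2017, 9))  # 1709
print(channel_version(2018, 3))  # 1803
```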



AWS S3 Storage Gateway Revisited (Part I)

Wed, 14 Jun 2017 23:13:03 -0600

AWS S3 Storage Gateway Revisited (how to avoid an install error). This Amazon Web Services (AWS) Storage Gateway Revisited post is a follow-up to the AWS Storage Gateway test drive and review I did a few years ago (thus why it's called revisited). As part of a two-part series, this first post looks at what AWS Storage Gateway is and how it has improved since my last review, along with deployment options. The second post in the series looks at a sample test drive deployment and use. If you need an AWS primer and overview of various services such as Elastic Cloud Compute (EC2), Elastic Block Storage (EBS), Elastic File Service (EFS), Simple Storage Service (S3), Availability Zones (AZ), Regions and other items, check this multi-part series (Cloud conversations: AWS EBS, Glacier and S3 overview (Part I)).

As a quick refresher, S3 is the AWS bulk, high-capacity unstructured and object storage service, along with its companion deep cold (e.g. inactive) Glacier. There are various S3 storage service classes, including standard, reduced redundancy storage (RRS) and infrequent access (IA), that have different availability, durability, performance, service level and cost attributes. Note that S3 IA is not Glacier, as your data always remains online accessible, while Glacier data can be offline. AWS S3 can be accessed via its API, as well as via HTTP REST calls and AWS tools, along with those from third parties. Third-party tools include NAS file access such as S3FS for Linux, which I use on my Ubuntu systems to mount S3 buckets and use them like other mount points. Other tools include Cloudberry, S3 Motion and S3 Browser, as well as plug-ins available in most data protection (backup, snapshot, archive) software tools and storage systems today.

AWS S3 Storage Gateway and What's New. The Storage Gateway is the AWS tool that you can use for accessing S3 buckets and objects via your block volume, NAS file or tape based applications.
The Storage Gateway is intended to give S3 bucket and object access to on-premises applications and data infrastructure functions, including data protection (backup/restore, business continuance (BC), business resiliency (BR), disaster recovery (DR) and archiving), along with storage tiering to cloud. Some of the things that have evolved with the S3 Storage Gateway include: easier, streamlined download, installation and deployment; enhanced Virtual Tape Library (VTL) and virtual tape support; file serving and sharing (not to be confused with Elastic File Services (EFS)); the ability to define your own bucket and associated parameters; bucket options including Infrequent Access (IA) or standard; and options for AWS EC2 hosted, or o[...]
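The trade-off among the storage classes mentioned above (standard for active data, IA for rarely accessed data that must stay online, Glacier for data that can be offline) can be sketched as a small decision helper. The threshold and function name below are illustrative assumptions, not AWS guidance:

```python
# Sketch: pick an S3 storage service class from a coarse access pattern,
# reflecting the classes mentioned above (standard, IA, Glacier).
# The threshold and function name are illustrative assumptions, not AWS guidance.

def suggest_storage_class(accesses_per_month: float, needs_online: bool) -> str:
    if accesses_per_month >= 1:
        return "STANDARD"
    # Rarely accessed: IA keeps data online; Glacier data can be offline.
    return "STANDARD_IA" if needs_online else "GLACIER"

print(suggest_storage_class(30, True))    # STANDARD
print(suggest_storage_class(0.1, True))   # STANDARD_IA
print(suggest_storage_class(0.0, False))  # GLACIER
```

The key point encoded in the branch: IA is not Glacier, because IA data always remains online accessible while Glacier data can be offline.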



Part II Revisiting AWS S3 Storage Gateway (Test Drive Deployment)

Wed, 14 Jun 2017 23:13:03 -0600

Part II Revisiting AWS S3 Storage Gateway (Test Drive Deployment). This Amazon Web Services (AWS) Storage Gateway Revisited post is a follow-up to the AWS Storage Gateway test drive and review I did a few years ago (thus why it's called revisited). As part of a two-part series, the first post looks at what AWS Storage Gateway is and how it has improved since my last review, along with deployment options. This second post looks at a sample test drive deployment and use.

What About Storage Gateway Costs? Costs vary by region, type of storage being used (files stored in S3, volume storage, EBS snapshots, virtual tape storage, virtual tape storage archive), as well as type of gateway host, along with how it is accessed and used. Request pricing varies, including data written to AWS storage by the gateway (up to a maximum of $125.00 per month), snapshot/volume deletes, virtual tape deletes (a prorated fee for deletes within 90 days of being archived), virtual tape archival and virtual tape retrieval. Note that there are also various data transfer fees that also vary by region and gateway host. Learn more about pricing here.

What Are Some Storage Gateway Alternatives? AWS and S3 storage gateway access alternatives include those from various third parties (including some in the AWS Marketplace), as well as via data protection tools (e.g. backup/restore, archive, snapshot, replication) and, more commonly, storage systems. Some tools include Cloudberry, S3FS, S3 Motion and S3 Browser among many others. Tip: when a vendor says they support S3, ask them if that is for their back-end (e.g. they can access and store data in S3) or front-end (e.g. they can be accessed by applications that speak the S3 API). Also explore what format the application, tool or storage system stores data in AWS storage: for example, are files mapped one to one to S3 objects along with a corresponding directory hierarchy, or are they stored in a save set or other entity?
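The data-written fee structure described above (charged on data written to AWS storage by the gateway, up to a maximum of $125.00 per month) amounts to a simple capped calculation. The per-GB rate below is a placeholder assumption, as actual rates vary by region; only the cap comes from the text:

```python
# Sketch: capped monthly fee for data written to AWS storage via the gateway,
# per the "up to a maximum of $125.00 per month" cap described above.
# RATE_PER_GB is a placeholder assumption; real rates vary by region.

RATE_PER_GB = 0.01    # hypothetical $/GB written
MONTHLY_CAP = 125.00  # cap stated in the article

def data_written_fee(gb_written: float) -> float:
    return round(min(gb_written * RATE_PER_GB, MONTHLY_CAP), 2)

print(data_written_fee(500))     # 5.0
print(data_written_fee(50_000))  # 125.0 (cap applies)
```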
AWS Storage Gateway Deployment and Management Tips. Once you have created your AWS account (if you did not already have one) and logged into the AWS console (note the link defaults to the US East 1 region), go to the AWS Services dashboard and select Storage Gateway (or click here, which goes to US East 1). You will be presented with three mode options (File, Volume or VTL). What Does the Storage Gateway Install Look Like? The following is what installing an AWS Storage Gateway for file and then volume looks like. First, access the AWS Storage Gateway main landing page (it might change by the time you read this) to get started. Scroll down and click on the Get Started with AWS Storage Gateway button or click here. [...]



Top vBlog 2017 Voting Now Open

Wed, 7 Jun 2017 19:18:17 -0600

Top vBlog 2017 Voting Now Open. It is that time of the year again when Eric Siebert (@ericsiebert) over at vSphere-land holds his annual Top vBlog (e.g. VMware and virtualization related) voting (vote here until June 30, 2017). The annual Top vBlog event enables fans to vote for their favorite blogs (to get them into the top 10, 25, 50 and 100) as well as rank them in different categories that appear on Eric's vLaunchPad site. This year's Top vBlog voting is sponsored by Turbonomic (formerly known as VMTurbo) who, if you are not aware of them, have some interesting technology for cross-platform (cloud, container, virtualization, hardware, software, services) data infrastructure management software tools.

The blogs and sites listed on Eric's site have a common theme linkage to virtualization, and in particular tend to be more VMware focused; however, some are also hybrid and agnostic, spanning other technologies, vendors, services and tools. Some examples of the different focus areas include hypervisors, VDI, cloud, containers, management tools, scripting, networking, servers, storage, and data protection (including backup/restore, replication, BC and DR among others). In addition to the main list of blogs (that are active), there are also sub-lists for different categories, including: Top 100 (also top 10, 25, 50) vBlogs; an archive of retired (e.g. not active or seldom-posting) blogs; news and information sites; podcasts; scripting blogs; storage-related blogs; various virtualization blogs; and VMware corporate blogs.

What To Do: Get out and vote for your favorite blogs (or blogs that you frequent) in appreciation of those who create virtualization, VMware and data infrastructure related content. Click here or on the image above to reach the voting survey site, where you will find more information and rules. In summary, select 12 of your favorite or preferred blogs, then rank them from 1 (most favorite) to 12.
Then select your favorites for other categories such as Female Blog, Independent, New Blog, News Website, Podcast, Scripting and Storage among others. Note: You will find my StorageIOblog in the main category (e.g. where you select 12 and then rank), as well as in the Storage, Independent and Podcast categories; thank you in advance for your continued support. Which Blogs Do I Recommend (Among Others)? Two of my favorite blogs (and authors) are not included, as Duncan Epping (Yellow Bricks, former #1) and Frank Denneman (former #4) chose not to take part this year, opening the door for some others to move up into the top 10 (or 25, 50 and 100). Of those listed, some of the blogs I find valuable include Cormac Hogan of [...]