Subscribe: Dana Gardner's BriefingsDirect
http://briefingsdirectblog.blogspot.com/feeds/posts/default

Dana Gardner's BriefingsDirect



Analysis and insights for enterprise IT, hybrid cloud and digital transformation strategists.



Updated: 2018-01-19T07:42:27.361-05:00




Infatuation leads to love -- how container orchestration and federation enable multi-cloud competition

2018-01-11T11:33:43.836-05:00

The use of containers by developers -- and now increasingly IT operators -- has grown from infatuation to deep and abiding love. But as with any long-term affair, the honeymoon soon leads to needing to live well together ... and maybe even getting some relationship help along the way.

And so it goes with container orchestration and automation solutions, which are rapidly emerging as the means to maintain the bliss between rapid container adoption and broad container use among multiple cloud hosts.

This BriefingsDirect cloud services maturity discussion focuses on new ways to gain container orchestration, to better use serverless computing models, and to employ inclusive management to keep the container love alive.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help unpack insights into the new era of using containers to gain ease with multi-cloud deployments are our panelists: Matt Baldwin, Founder and CEO at StackPointCloud, based in Seattle; Nic Jackson, Developer Advocate at HashiCorp, based in San Francisco; and Reynold Harbin, Director of Product Marketing at DigitalOcean, based in New York. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Nic, HashiCorp has gone a long way to enable multi-cloud provisioning. What are some of the trends now driving the need for multi-cloud? And how does container management and orchestration fit into the goal of obtaining functional multi-cloud use, or even interoperability?

Jackson: What we see mainly from our enterprise customers is that people are looking for a number of different ways so that they don't get locked into one particular cloud provider. They are looking for high availability and redundancy across cloud providers. They are looking for a migration path from private cloud to a public cloud. Or they want burstable capacity, which means that they can take that private cloud and burst it out into public cloud, if need be.

Containers -- and orchestration platforms like Kubernetes, Nomad and Swarm -- are providing standard interfaces to developers. So once you have the platform set up, the running of an application can be mostly cloud-agnostic.

Gardner: There's a growing need for container management and orchestration for not only cloud-agnostic development, but potentially as a greasing of the skids, if you will, to a multi-cloud world.

Harbin: Yes. If you make the investment now to architect and package your applications with containers and intelligent orchestration, you will have much better agility to move your application across cloud providers.

This will also enable you to quickly leverage any new products on any cloud provider. For example, DigitalOcean recently upgraded our High CPU Droplet plans, providing some of the best values for accessing the latest chipsets from Intel. For users with containerized applications and orchestration, they could easily improve application performance by moving workloads over to that new product.

Gardner: And, Matt, at StackPointCloud you have created a universal control plane for Kubernetes. How does that help in terms of ease of deployment choice and multi-cloud use?

Ease-of-use increases flexibility

Baldwin: We've basically built a management control plane for Kubernetes that gives you a single pane of glass across all your cloud providers. We deal with the top four: Amazon, Microsoft Azure, Google and DigitalOcean. Because we provide that single pane of glass, you can build the clusters you need with those providers and you can stand up federation.

In Kubernetes, multi-cloud is done via that federation. The federation control plane connects all of those clusters together. We are also managing workloads to balance workloads across, say, some on Amazon Web Services (AWS) and some on DigitalOcean, if you l[...]
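Baldwin's "single pane of glass" idea can be approximated at small scale with the official Kubernetes Python client, iterating over kubeconfig contexts that each point at a cluster on a different provider. A minimal sketch, assuming contexts with the hypothetical names below already exist in your local kubeconfig:

```python
# Sketch: enumerate nodes across several clusters from one place,
# assuming one kubeconfig context per cloud provider.
# Requires the official client: pip install kubernetes
from kubernetes import client, config

# Hypothetical context names -- adjust to match your kubeconfig.
CONTEXTS = ["aws-cluster", "azure-cluster", "gcp-cluster", "do-cluster"]

def list_nodes_per_cluster(contexts):
    for ctx in contexts:
        # Build an API client bound to this specific context.
        api_client = config.new_client_from_config(context=ctx)
        v1 = client.CoreV1Api(api_client=api_client)
        nodes = v1.list_node().items
        print(f"{ctx}: {len(nodes)} node(s)")
        for node in nodes:
            print(f"  - {node.metadata.name}")

if __name__ == "__main__":
    list_nodes_per_cluster(CONTEXTS)
```

This only reads cluster state; a real control plane like StackPointCloud's adds provisioning, federation setup, and workload balancing on top of the same per-cluster API access pattern.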



How a large Missouri medical center developed an agile healthcare infrastructure security strategy

2018-01-08T16:17:02.436-05:00

Healthcare provider organizations are among the most challenging environments in which to develop and implement comprehensive and agile security infrastructures.

These providers of healthcare are usually sprawling campuses with large ecosystems of practitioners, suppliers, and patient-facing facilities. They also operate under stringent compliance requirements, with data privacy as a top priority.

At the same time, large hospitals and their extended communities are seeking to become more patient outcome-focused as they deliver ease-of-use, the best applications, as well as up-to-date data analysis to their staffs and physicians.

The next BriefingsDirect security insights discussion examines how a large Missouri medical center developed a comprehensive healthcare infrastructure security strategy from the edge to the data center -- and everything in between.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how healthcare security can become more standardized and proactive with unified management and lower total costs, BriefingsDirect sat down with Phillip Yarbro, Network and Systems Engineer at Saint Francis Healthcare System in Cape Girardeau, Missouri. The discussion was moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: When it comes to security nowadays, Phil, there's a lot less chunking it out, of focusing on just devices or networks separately or on data centers alone. It seems that security needs to be deployed holistically -- or at least strategically -- with standardized solutions, focused on across-the-board levels of coverage. Tell us how you've been able to elevate security to that strategic level at Saint Francis Healthcare System.

Yarbro: As a healthcare organization, we have a wide variety of systems -- from the electronic medical records (EMR) system that we are currently using, to our 10-plus legacy EMRs, our home health system, and payroll, time and attendance. Like you said, that's a wide variety of systems to keep up-to-date with antivirus solutions, making sure all of those are secure, especially with them being virtualized. All of those systems require a number of different exclusions and whatnot.

With our previous EMR, it was really hard to get those exclusions working and to minimize false positives. Over the past several years, security demands have increased. There are a lot more PCs and servers in the environment. There are a lot more threats taking place in healthcare systems, some targeting protected health information (PHI) or financial data, and we needed a solution that would protect a wide variety of endpoints; something that we could keep up-to-date extremely easily, and that would cover a wide variety of systems and devices.

Gardner: It seems like they're adding more risk to this all the time, so it's not just a matter of patching and keeping up. You need to be proactive, whenever possible.

Yarbro: Yes, being proactive is definitely key. Some of the features that we like about our latest systems are that you can control applications, and we're looking at doing that to keep our systems even more secure, rather than just focusing on real-time threats, and things like that.

Gardner: Before we learn more about your security journey, tell us about Saint Francis Healthcare System, the size of the organization and also the size of your IT department.

Yarbro: Saint Francis is between St. Louis and Memphis. It's the largest hospital between the two cities. It's a medium-sized hospital with 308 beds. We have a Level III neonatal intensive care unit (NICU) and a Level III trauma center. We see and treat more than 700,000 people within a five-state area.

With all of those beds, we have about 3,000 total staff, including referring physicians, contractors, and things like that. The IT help desk support, infrastructu[...]
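The application control Yarbro mentions is commonly implemented as an allow-list: only executables whose cryptographic hashes appear on an approved list may run. A minimal illustrative sketch of that idea, not Saint Francis's actual tooling; the file path and the approved-hash entry are hypothetical placeholders:

```python
# Sketch of hash-based application allow-listing, the idea behind
# "controlling applications" rather than only scanning for threats.
import hashlib
from pathlib import Path

# Hypothetical allow-list: SHA-256 digests of approved executables.
APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder
}

def sha256_of(path: Path) -> str:
    # Hash in chunks so large binaries do not need to fit in memory.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path: Path) -> bool:
    """Return True only if the binary's hash is on the allow-list."""
    return sha256_of(path) in APPROVED_HASHES

if __name__ == "__main__":
    candidate = Path("/usr/local/bin/some-app")  # hypothetical path
    print("allowed" if may_execute(candidate) else "blocked")
```

The design point is that an allow-list denies by default, so novel malware is blocked even before any signature or real-time detection knows about it.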



Inside story on HPC's role in the Bridges Research Project at Pittsburgh Supercomputing Center

2017-11-21T14:32:34.310-05:00

The next BriefingsDirect Voice of the Customer high-performance computing (HPC) success story interview examines how Pittsburgh Supercomputing Center (PSC) has developed a research computing capability, Bridges, and how that's providing new levels of analytics, insights, and efficiencies.

We'll now learn how advances in IT infrastructure and memory-driven architectures are combining to meet the new requirements for artificial intelligence (AI), big data analytics, and deep machine learning.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe the inside story on building AI Bridges are Dr. Nick Nystrom, Interim Director of Research, and Paola Buitrago, Director of AI and Big Data, both at Pittsburgh Supercomputing Center. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let's begin with what makes Bridges unique. What is it about Bridges that is possible now that wasn't possible a year or two ago?

Nystrom: Bridges allows people who have never used HPC before to use it for the first time. These are people in business, social sciences, different kinds of biology and other physical sciences, and people who are applying machine learning to traditional fields. They're using the same languages and frameworks that they've been using on their laptops, and now that is scaling up to a supercomputer. They are bringing big data and AI together in ways that they just haven't done before.

Gardner: It almost sounds like the democratization of HPC. Is that one way to think about it?

Nystrom: It very much is. We have users who are applying tools like R and Python and scaling them up to very large memory -- up to 12 terabytes of random access memory (RAM) -- and that enables them to gain answers to problems they've never been able to answer before.

Gardner: There is a user experience aspect, but I have to imagine there are also underlying infrastructure improvements that also contribute to user democratization.

Nystrom: Yes, democratization comes from two things. First, we stay closely in touch with the user community and we look at this opportunity from their perspective first. What are the applications that they need to run? What do they need to do? And from there, we began to work with hardware vendors to understand what we had to build, and what we came up with is a very heterogeneous system.

We have three tiers of nodes having memories ranging from 128 gigabytes to 3 terabytes, to 12 terabytes of RAM. That's all coupled on the same very-high-performance fabric. We were the first installation in the world with the Intel Omni-Path interconnect, and we designed that in a custom topology that we developed at PSC expressly to make big data available as a service to all of the compute nodes with equally high bandwidth and low latency, and to let these new things become possible.

Gardner: What other big data analytics benefits have you gained from this platform?

Buitrago: A platform like Bridges enables that which was not available before. There's a use case that was recently described by Tuomas Sandholm [Professor and Director of the Electronic Marketplaces Lab at Carnegie Mellon University. It involves strategic machine learning using Bridges HPC to play and win at Heads-Up, No-Limit Texas Hold'em poker as a capabilities benchmark.]

This is a perfect example of something that could not have been done without a supercomputer. A supercomputer enables massive and complex models that can actually give an accurate answer.

Right now, we are collecting a lot of data. There's a convergence of having great capabilities right in the compute and storage -- and also having the big data to answer really important questions. Having a system like Brid[...]
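The three node tiers Nystrom describes (128 GB, 3 TB, and 12 TB of RAM) imply a simple placement rule: route a job to the smallest tier whose memory holds the job's working set. A toy sketch of that decision, with the tier sizes taken from the interview and the tier names being illustrative rather than PSC's actual labels:

```python
# Toy scheduler logic: pick the smallest Bridges-style node tier
# whose RAM can hold a job's estimated working set.
GIB = 1024**3
TIB = 1024**4

# Tier sizes as described in the interview: 128 GB, 3 TB, 12 TB.
NODE_TIERS_BYTES = [
    ("regular-memory", 128 * GIB),
    ("large-memory", 3 * TIB),
    ("extreme-memory", 12 * TIB),
]

def pick_tier(working_set_bytes: int) -> str:
    for name, capacity in NODE_TIERS_BYTES:
        if working_set_bytes <= capacity:
            return name
    raise ValueError("Working set exceeds the largest node tier; "
                     "the job must be distributed across nodes.")

if __name__ == "__main__":
    # e.g., an in-memory R/Python analysis over a 2 TiB dataset
    print(pick_tier(2 * TIB))  # -> large-memory
```

This is the essence of the "democratization" point: an R or pandas user whose dataset outgrows a laptop does not rewrite the analysis, they just land on a bigger-memory node.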



How UBC gained TCO advantage via flash for its EduCloud cloud storage service

2017-11-20T14:50:41.488-05:00

The next BriefingsDirect cloud efficiency case study explores how a storage-as-a-service offering in a university setting gains performance and lower total cost benefits by a move to all-flash storage.

We'll now learn how the University of British Columbia (UBC) has modernized its EduCloud storage service and attained both efficiency as well as better service levels for its diverse user base.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore new breeds of SaaS solutions is Brent Dunington, System Architect at UBC in Vancouver. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How is satisfying the storage demands at a large and diverse university setting a challenge? Is there something about your users and the diverse nature of their needs that provides you with a complex requirements list?

Dunington: A university setting isn't much different than any other business. The demands are the same. UBC has about 65,000 students and about 15,000 staff. The students these days are younger kids, they all have iPhones and iPads, and they just want to push buttons and get instant results and instant gratification. And that boils down to the services that we offer. We have to be able to offer those services, because as most people know, there are choices -- and they can go somewhere else and choose those other products.

Our team is a rather small team. There are 15 members in our team, so we have to be agile, we have to be able to automate things, and we need tools that can work and fulfill those needs. So it's just like any other business, even though it's a university setting.

Gardner: Can you give us a sense of the scale that describes your storage requirements?

Dunington: We do SaaS, and we also do infrastructure-as-a-service (IaaS). EduCloud is a self-service IaaS product that we deliver to UBC, but we also deliver it to 25 other higher-education institutions in the Province of British Columbia.

We have been doing IaaS for five years, and we have been very, very successful. So more people are looking to us for guidance. Because we are not just delivering to UBC, we have to be up and running and always able to deliver, because each school has different requirements. At different times of the year -- because there is registration, there are exam times -- these things have to be up. You can't not be functioning during an exam and have 600 students unable to take the tests that they have been studying for. So it impacts their lives, and we want to make sure that we are there and can provide the services for what they need.

Gardner: In order to maintain your service levels within those peak times, do you employ hybrid-cloud capabilities in your IaaS and storage services so that you can burst? Or are you doing this all through your own data center and your own private cloud?

On-Campus Cloud

Dunington: We do it all on-campus. British Columbia has a law that says all the data has to stay in Canada. It's a data-sovereignty law; the data can't leave the borders. That's why EduCloud has been so successful, in my opinion, because of that option. They can just go and throw things out in the private cloud. The public cloud providers are providing more services in Canada: Amazon Web Services (AWS) and Microsoft Azure are putting data centers in Canada, which is good, and it gives people an option.

Our team's goal is to provide the services, whether it's a hybrid model or all on-campus. We just want to be able to fulfill those needs.

Gardner: It sounds like the best of all worlds. You are able to give that elasticity benefit, a lot of instant service requirements met for your consumers. But you are starting to use cloud pay-as-you-go types of models and get the benefit of the public cloud model -- but with the security, control and manageability of the private clouds. What decision[...]



How modern storage provides hints on optimizing and best managing hybrid IT and multi-cloud resources

2017-11-17T11:51:12.070-05:00

The next BriefingsDirect Voice of the Analyst interview examines the growing need for proper rationalizing of which apps, workloads, services and data should go where across a hybrid IT continuum.

Managing hybrid IT necessitates not only a choice between public cloud and private cloud, but a more granular approach to picking and choosing which assets go where based on performance, costs, compliance, and business agility.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to report on how to begin to better assess what IT variables should be managed and thoughtfully applied to any cloud model is Mark Peters, Practice Director and Senior Analyst at Enterprise Strategy Group (ESG). The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Now that cloud adoption is gaining steam, it may be time to step back and assess what works and what doesn't. In past IT adoption patterns, we've seen a rapid embrace that sometimes ends with at least a temporary hangover. Sometimes, it's complexity or runaway or unmanaged costs, or even usage patterns that can't be controlled. Mark, is it too soon to begin assessing best practices in identifying ways to hedge against any ill effects from runaway adoption of cloud?

Peters: The short answer, Dana, is no. It's not that the IT world is that different. It's just that we have more and different tools. And that is really what hybrid comes down to -- available tools.

It's not that those tools themselves demand a new way of doing things. They offer the opportunity to continue to think about what you want. But if I have one repeated statement as we go through this, it will be that it's not about focusing on the tools, it's about focusing on what you're trying to get done. You just happen to have more and different tools now.

Gardner: We hear sometimes that at as high as board of director levels, they are telling people to go cloud-first, or just dump IT altogether. That strikes me as an overreaction. If we're looking at tools and what they do best, is cloud so good that we can actually just go cloud-first or cloud-only?

Cloudy cloud adoption

Peters: Assuming you're speaking about management by objectives (MBO), doing cloud or cloud-only because that's what someone with a C-level title saw in a Microsoft cloud ad on TV and decided was right, well -- that clouds everything.

You do see increasingly different people outside of IT becoming involved in the decision. When I say outside of IT, I mean outside of the operational side of IT.

You get other functions involved in making demands. And because the cloud can be so easy to consume, you see people just running off and deploying some software-as-a-service (SaaS) or infrastructure-as-a-service (IaaS) model because it looked easy to do, and they didn't want to wait for internal IT to make the change.

All of the research we do shows that the world is hybrid for as far ahead as we can see. Running away from internal IT and on-premises IT is not going to be a good idea for most organizations -- at least for a considerable chunk of their workloads.

Gardner: I certainly agree with that. If it's all then about a mix of things, how do I determine the correct mix? And if it's a correct mix between just a public cloud and private cloud, how do I then properly adjust to considerations about applications as opposed to data, as opposed to bringing in microservices and Application Programming Interfaces (APIs) when they're the best fit?

How do we begin to rationalize all of this better? Because I think we've gotten to the point where we need to gain some maturity in terms of the consumption of hybrid IT.

Peters: I often talk abo[...]
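Gardner's question about determining "the correct mix" is, at bottom, a multi-criteria placement decision over the factors named at the top of this piece: performance, cost, compliance, and business agility. A deliberately simplified sketch of such a scoring rubric; the weights, venues, and scores are invented for illustration, not ESG's methodology:

```python
# Toy workload-placement rubric: score each venue on the four factors
# the discussion names (performance, cost, compliance, agility) and
# pick the best fit. All weights and scores are illustrative only.
from dataclasses import dataclass

@dataclass
class VenueScore:
    name: str
    performance: float  # 0..1, higher is better
    cost: float         # 0..1, higher means cheaper
    compliance: float   # 0..1, higher means easier to satisfy
    agility: float      # 0..1, higher means faster to change

WEIGHTS = {"performance": 0.3, "cost": 0.3, "compliance": 0.25, "agility": 0.15}

def total(v: VenueScore) -> float:
    return (WEIGHTS["performance"] * v.performance
            + WEIGHTS["cost"] * v.cost
            + WEIGHTS["compliance"] * v.compliance
            + WEIGHTS["agility"] * v.agility)

venues = [
    VenueScore("private-cloud", 0.8, 0.6, 0.9, 0.5),
    VenueScore("public-iaas", 0.7, 0.7, 0.6, 0.9),
    VenueScore("saas", 0.6, 0.8, 0.5, 0.8),
]

best = max(venues, key=total)
print(f"best venue: {best.name} (score {total(best):.2f})")
```

Real rationalization exercises add per-workload weights (a compliance-bound system weights compliance near 1.0), which is exactly why the answer keeps coming out "hybrid" rather than one venue for everything.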



Kansas Development Finance Authority gains peace of mind and an endpoint virtual shield using hypervisor-level security

2017-11-14T15:29:53.114-05:00

Implementing and managing IT security has leaped in complexity for organizations ranging from small and medium-sized businesses (SMBs) to massive government agencies.

Once-safe products used to thwart invasions have now been exploited. E-mail phishing campaigns are far more sophisticated, leading to damaging ransomware attacks.

What's more, the jack-of-all-trades IT leaders of mid-market concerns are striving to protect more data types on and off premises, their workload servers and expanded networks, as well as the many essential devices of the mobile workforce.

Security demands have gone up, yet there is a continual need for reduced manual labor and costs -- while protecting assets sooner and better.

The next BriefingsDirect security strategies case study examines how a Kansas economic development organization has been able to gain peace of mind by relying on increased automation and intelligence in how it secures its systems and people.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To explore how an all-encompassing approach to security has enabled improved results with fewer hours at a smaller enterprise, BriefingsDirect sat down with Jeff Kater, Director of Information Technology and Systems Architect at Kansas Development Finance Authority (KDFA) in Topeka. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: As the director of all of IT at KDFA, security must be a big concern, but it can't devour all of your time. How have you been able to balance security demands with all of your other IT demands?

Kater: That's a very interesting question, and it has a multi-segmented answer. In years past, leading up to the development of what KDFA is now, we faced the trends that demanded very basic anti-spam solutions and the very basic virus threats that came via the web and e-mail.

What we've seen more recently is the growing trend of enhanced security attacks coming through malware and different exploits -- ones that were once thought impossible are now the reality.

Therefore in recent times, my percentage of time dedicated to security has grown from probably five to 10 percent all the way up to 50 to 60 percent of my workload during each given week.

Gardner: Before we get to how you've been able to react to that, tell us about KDFA.

Kater: KDFA promotes economic development and prosperity for the State of Kansas by providing efficient access to capital markets through various tax-exempt and taxable debt obligations.

KDFA works with public and private entities across the board to identify financial options and solutions for those entities. We are a public corporate entity operating in the municipal finance market, and therefore we are a conduit finance authority.

KDFA is a very small organization -- but a very important one. Therefore we run enterprise-ready systems around the clock, enabling our staff to be as nimble and as efficient as possible.

There are about nine or 10 of us who operate here on any given day at KDFA. We run on a completely virtual environment platform via Citrix XenServer. So we run XenApp, XenDesktop, and NetScaler -- almost the full gamut of Citrix products.

We have a few physical endpoints, such as laptops and iPads, and we also have the mobile workforce on iPhones as well. They are all interconnected using the virtual desktop infrastructure (VDI) approach.

Gardner: You've had this swing, where your demands from just security issues have blossomed. What have you been doing to wrench that back? How do you get your day back, to innovate and put in place real productivity improvements?

We wanted to be able to be nimble, to be adaptive, and to grow our business workload while maintaining our current staff size.

Kater: We went with virtualization via Citrix. It became our solution of choice due to not being willing to pay the extra tax[...]



Globalization risks and data complexity demand new breed of hybrid IT management, says Wikibon’s Burris

2017-11-13T17:04:58.082-05:00

The next BriefingsDirect Voice of the Analyst interview explores how globalization and distributed business ecosystems factor into hybrid cloud challenges and solutions.

Mounting complexity and a lack of multi-cloud services management maturity are forcing companies to seek new breeds of solutions so they can grow and thrive as digital enterprises.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to report on how international companies must factor localization, data sovereignty and other regional cloud requirements into any transition to sustainable hybrid IT is Peter Burris, Head of Research at Wikibon. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Peter, companies doing business or software development just in North America can have an American-centric view of things. They may lack an appreciation for the global aspects of cloud computing models. We want to explore that today. How much more complex is doing cloud -- especially hybrid cloud -- when you're straddling global regions?

Burris: There are advantages and disadvantages to thinking cloud-first when you are thinking globalization first. The biggest advantage is that you are able to work in locations that don't currently have the broad-based infrastructure that's typically associated with a lot of traditional computing modes and models.

The downside of it is, at the end of the day, that the value in any computing system is not so much in the hardware per se; it's in the data that's the basis of how the system works. And because of the realities of working with data in a distributed way, globalization that is intended to more fully enfranchise data wherever it might be introduces a range of architectural, implementation and legal complexities that can't be discounted.

So, cloud and globalization can go together -- but it dramatically increases the need for smart and forward-thinking approaches to imagining, and then ultimately realizing, how those two go together, and what hybrid architecture is going to be required to make it work.

Gardner: If you need to then focus more on the data issues -- such as compliance, regulation, and data sovereignty -- how is that different from taking an applications-centric view of things?

Burris: Most companies have historically taken an infrastructure-centric approach to things. They start by saying, "Where do I have infrastructure, where do I have servers and storage, do I have the capacity for this group of resources, and can I bring the applications up here?" And if the answer is yes, then you try to ultimately economize on those assets and build the application there.

That runs into problems when we start thinking about privacy, and in ensuring that local markets and local approaches to intellectual property management can be accommodated.

But the issue is more than just things like the General Data Protection Regulation (GDPR) in Europe, which is a series of regulations in the European Union (EU) that are intended to protect consumers from what the EU would regard as inappropriate leveraging and derivative use of their data.

Ultimately, the globe is a big place. It's 12,000 miles or so from point A to the farthest point B, and physics still matters. So, the first thing we have to worry about when we think about globalization is the cost of latency and the cost of bandwidth of moving data -- either small or very large -- across different regions. It can be extremely expensive and sometimes impossible to even conceive [...]
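Burris's point that "physics still matters" can be made concrete with back-of-the-envelope arithmetic: light in optical fiber travels at roughly two-thirds of c (about 200,000 km/s), so distance alone sets a floor on round-trip time, and link bandwidth sets a floor on bulk-transfer time. A small sketch of both bounds; the example data size and link speed are arbitrary, and real networks add routing and queuing on top:

```python
# Back-of-the-envelope bounds on cross-region data movement.
# Light in fiber propagates at roughly 2/3 the vacuum speed of light.
FIBER_KM_PER_S = 200_000  # ~2e5 km/s, a common approximation

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case RTT from propagation delay alone (no routing, queuing)."""
    return 2 * distance_km / FIBER_KM_PER_S * 1000

def transfer_hours(data_tb: float, link_gbps: float) -> float:
    """Best-case time to move data_tb terabytes over a link_gbps link."""
    bits = data_tb * 8e12
    return bits / (link_gbps * 1e9) / 3600

if __name__ == "__main__":
    # ~12,000 miles (~19,300 km), the near-antipodal figure from the interview
    print(f"min RTT: {min_round_trip_ms(19_300):.0f} ms")   # ~193 ms
    # Moving 100 TB over a dedicated 10 Gbps link, ignoring overhead
    print(f"100 TB over 10 Gbps: {transfer_hours(100, 10):.0f} hours")  # ~22 h
```

A best-case round trip near 200 ms is fatal for chatty, latency-sensitive services, which is the architectural argument for keeping data and the services that consume it in the same region.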



How modern architects transform the messy mix of hybrid cloud into a force multiplier

2017-11-08T07:01:08.478-05:00

The next BriefingsDirect cloud strategies insights interview focuses on how IT architecture and new breeds of service providers are helping enterprises manage complex cloud scenarios.

We'll now learn how composable infrastructure and auto-scaling help improve client services, operations, and business goals attainment for a New York cloud services and architecture support provider.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us learn what's needed to reach the potential of multiple -- and often overlapping -- cloud models is Arthur Reyenger, Cloud Practice Lead and Chief Cloud Architect at International Integrated Solutions (IIS) Ltd. in New York.

Here are some excerpts:

Gardner: How are IT architecture and new breeds of service providers coming together? What's different now from just a few years ago for architecture when we have cloud, multi-cloud, and hybrid cloud services?

Reyenger: Like the technology trends themselves, everything is accelerating. Before, you would have three-year or even five-year plans that were developed by the business. They were designed to reach certain business outcomes; the business would design the technology to support that, and it was then heads-down to build my rocket ship.

It's changed now to where it's a 12-month strategy that needs to be modular enough to be reevaluated at the end of those 12 months, and be re-architected -- almost as if it were made of Lego blocks.

Gardner: More moving parts, less time.

Reyenger: Absolutely.

Gardner: How do you accomplish that?

Reyenger: You leverage different cloud service providers, different managed services providers, and traditional value-added resellers, like International Integrated Solutions (IIS), in order to meet those business demands. We see a large push around automation, orchestration and auto-scaling. It's becoming a way to achieve those business initiatives at that higher speed.

Gardner: There is a cloud continuum. You are choosing which workloads and what data should be on-premises, and what should be in a cloud, or multi-clouds. Trying to do this as a regular IT shop -- buying it, specifying it, integrating it -- seems like it demands more than the traditional IT skills. How is the culture of IT adjusting?

Reyenger: Every organization, including ours, has its own business transformation that it has to undergo. We think that we are extremely proactive. I see some companies that are developing in-house skill sets, and trying to add additional departments that would be more cloud-aware in order to meet those demands.

On the other side, you have folks that are leveraging partners like IIS, which has acumen within those spaces to supplement their bench, or they are building out a completely separate organization that will hopefully take them to the new frontier.

Gardner: Tell us about your company. What have you done to transform?

Reyenger: IIS has spent 26 years building out an amazing book of business with amazing relationships with a lot of enterprise customers. But as times change, you need to be able to add additional practices, like our cloud practice and our managed services practice. We have taken the knowledge we have around traditional IT services and then added in our internal developers and delivery consultants. They are very well-versed and aware of the new architecture.

So we can marry the two together and help organizations reach that new end-state. It's very easy for startups to go 100 percent to the cloud and just run with it. It's different when you have 2,000 existing applications and you want to move to the future as well. It's nice to have someone who understands both of those worlds -- and the appropriate way to integrate them.

Gardner: I suppose there is no typical cloud engagement, but[...]
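The "automation, orchestration and auto-scaling" push Reyenger describes usually reduces to a feedback loop: measure utilization against a target and resize capacity to close the gap. A minimal, provider-agnostic sketch of that control loop; the metric source and scaling action are stubs you would wire to a real provider API, and the target and bounds are invented:

```python
# Minimal auto-scaling control loop: keep observed utilization near a
# target by resizing the instance pool. Stubs stand in for real APIs.
import math

TARGET_UTILIZATION = 0.60   # aim for 60% average CPU (illustrative)
MIN_INSTANCES, MAX_INSTANCES = 2, 20

def observed_utilization() -> float:
    """Stub: replace with a real metrics query (CloudWatch, etc.)."""
    return 0.85

def scale_to(count: int) -> None:
    """Stub: replace with a real resize call to your provider."""
    print(f"scaling pool to {count} instance(s)")

def reconcile(current_count: int) -> int:
    # Target-tracking rule: desired = ceil(current * observed / target),
    # clamped to the pool's configured bounds.
    desired = math.ceil(current_count * observed_utilization() / TARGET_UTILIZATION)
    desired = max(MIN_INSTANCES, min(MAX_INSTANCES, desired))
    if desired != current_count:
        scale_to(desired)
    return desired

if __name__ == "__main__":
    print("new count:", reconcile(current_count=6))  # 6 * 0.85/0.60 -> 9
```

Production systems add cooldown periods and hysteresis so the pool does not thrash between sizes, but the proportional rule above is the core of most target-tracking auto-scalers.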



As enterprises face mounting hybrid IT complexity, new management solutions beckon

2017-11-06T15:52:44.421-05:00

The next BriefingsDirect Voice of the Analyst interview examines how new machine learning and artificial intelligence (AI) capabilities are being applied to hybrid IT complexity challenges.

We'll explore how mounting complexity and a lack of multi-cloud services management maturity must be solved in order for businesses to grow and thrive as digital enterprises.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to report on how companies and IT leaders are seeking new means to manage an increasingly complex transition to sustainable hybrid IT is Paul Teich, Principal Analyst at TIRIAS Research in Austin, Texas. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Paul, there's a lot of evidence that businesses are adopting cloud models at a rapid pace. There is also lingering concern about the complexity of managing so many fast-moving parts. We have legacy IT, private cloud, public cloud, software as a service (SaaS) and, of course, multi-cloud. So as someone who tracks technology and its consumption, how much has technology itself been tapped to manage this sprawl, if you will, across hybrid IT?

Teich: So far, not very much, mostly because of the early state of the multi-cloud and hybrid cloud business model. As you know, it takes a while for management technology to catch up with the actual compute and storage technology. So I think we are seeing that management is the tail of the dog; it's getting wagged by the rest of it, and it just hasn't caught up yet.

Gardner: Things have been moving so quickly with cloud computing that few organizations have had an opportunity to step back and examine what's actually going on around them -- never mind properly react to it. We really are playing catch-up.

Teich: As we look at the options available, the cloud giants -- the public cloud services -- don't have much incentive to work together. So you are looking at a market where there will be third parties stepping in to help manage multi-cloud environments, and there's a lag time between the cloud services becoming available and the third-party management solutions stepping in.

Gardner: It's natural that a specific cloud environment, whether it's purely public like AWS or a hybrid like Microsoft Azure and Azure Stack, wants to help its customers -- but to help them all get to its own solutions first and foremost. It's a natural thing; we have seen this before in technology. There are not that many organizations willing to step into the neutral position of being ecumenical, of saying they want to help the customer first, and manage it all from the start.

As we look to how this might unfold, it seems to me that the previous models of IT management -- agent-based, single-pane-of-glass, and unfortunately still in some cases spreadsheets and Post-It notes -- have been brought to bear on this. But we might be in a different ball game, Paul, with hybrid IT. There are just too many moving parts, too much complexity, and we might need to look at data-driven approaches. What is your take on that?

Teich: I think that's exactly correct. One of the jokes in the industry right now is that if you want to find your stranded instances in the cloud, cancel your credit card, and AWS or Microsoft will be happy to notify you of all of the instances that you are no longer paying for because your credit card expired. It's hard to keep track of this, because we don't have adequate tools yet.

When you are an IT manager and you have a lot of folks on public cloud services, you don't have a full picture. That single pane of glass, looking at a lot of data and i[...]
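Teich's "stranded instances" joke points at a real housekeeping task: inventorying what is still running and how long it has been up. A rough sketch using AWS's boto3 SDK to flag long-running instances for review; the 30-day threshold is arbitrary, and the snippet assumes credentials and a default region are already configured:

```python
# Sketch: flag EC2 instances that have been running a long time, as a
# crude first pass at spotting forgotten ("stranded") capacity.
# Assumes AWS credentials and a default region are already configured.
from datetime import datetime, timedelta, timezone

import boto3

REVIEW_AFTER = timedelta(days=30)  # arbitrary review threshold

def long_running_instances():
    ec2 = boto3.client("ec2")
    now = datetime.now(timezone.utc)
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                age = now - inst["LaunchTime"]
                if age > REVIEW_AFTER:
                    yield inst["InstanceId"], age.days

if __name__ == "__main__":
    for instance_id, days in long_running_instances():
        print(f"{instance_id}: running {days} days -- still needed?")
```

Uptime alone is a weak signal (plenty of legitimate services run for months), so real tooling joins this inventory with utilization metrics and ownership tags; but even this single-cloud pass illustrates why a cross-cloud, data-driven view is the harder and more valuable problem.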



How mounting complexity, multi-cloud sprawl, and need for maturity hinder hybrid IT’s ability to grow and thrive

2017-11-02T16:24:32.576-04:00

The next BriefingsDirect Voice of the Analyst interview examines how the economics and risk management elements of hybrid IT factor into effective cloud adoption and choice.

We'll now explore how mounting complexity and a lack of multi-cloud services management maturity must be solved in order for businesses to grow and thrive as digital enterprises.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Tim Crawford, CIO Strategic Advisor at AVOA in Los Angeles, joins us to report on how companies are managing an increasingly complex transition to sustainable hybrid IT. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, there's a lot of evidence that businesses are adopting cloud models at a rapid pace. But there is also lingering concern about how to best determine the right mix of cloud, what kinds of cloud, and how to mitigate the risks and manage change over time.

As someone who regularly advises chief information officers (CIOs), who or which group is surfacing that is tasked with managing this cloud adoption and its complexity within these businesses? Who will be managing this dynamic complexity?

Crawford: For the short term, I would say everyone. It's not as simple as it has been in the past, where we looked to the IT organization as the end-all, be-all for all things technology. As we begin talking about different consumption models -- and cloud is a relatively new consumption model for technology -- it changes the dynamics of it.

It's the combination of changing that consumption model -- but then there's another factor that comes into this. There is also the consumerization of technology, right? We are "democratizing" technology to the point where everyone can use it, and therefore everyone does use it, and they begin to get more comfortable with technology.

It's not as it used to be, where we would say, "Okay, I'm not sure how to turn on a computer." Now, businesses may be more familiar outside of the IT organization with certain technologies. Bringing that full circle, the answer is that we have to look beyond just IT. Cloud is something that is consumed by IT organizations. It's consumed by different lines of business, too. It's consumed even by end-consumers of the products and services. I would say it's all of the above.

Gardner: The good news is that more and more people are able to -- on their own -- innovate, to acquire cloud services, and they can factor those into how they obtain business objectives. But do you expect that we will get to the point where that becomes disjointed? Will the goodness of innovation become something that spins out of control, or becomes a negative over time?

Crawford: To some degree, we've already hit that inflection point where technology is being used in inappropriate ways. A great example of this -- and it's something that just kind of raises the hair on the back of my neck -- is when I hear that boards of directors of publicly traded companies are giving mandates to their organization to "go cloud."

The board should be very business-focused, and instead they're dictating specific technology -- whether it's the right technology or not. That's really what this comes down to.

Another example is folks that try and go all-in on cloud but aren't necessarily thinking about what's the right use of cloud -- in all forms: public, private, software as a service (SaaS). What's the right combination to use for any given applicatio[...]



Case study: How HCI-powered private clouds accelerate efficient digital transformation

2017-10-24T16:49:14.190-04:00

The next BriefingsDirect cloud efficiency case study examines how a world-class private cloud project evolved in the financial sector.

We'll now learn how public cloud-like experiences, agility, and cost structures are being delivered via a strictly on-premises model built on hyper-converged infrastructure for a risk-sensitive financial services company.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Jim McKittrick joins us to help explore the potential for cloud benefits when retaining control over the data center is a critical requirement. He is Senior Account Manager at Applied Computer Solutions (ACS) in Huntington Beach, California. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Many enterprises want a private cloud for security and control reasons. They want an OpEx-like public cloud model and that total on-premises control. Can you have it both ways?

McKittrick: We are showing that you can. People are learning that the public cloud isn't necessarily all it has been hyped up to be, which is what happens with newer technologies as they come out.

Gardner: What are the drivers for keeping it all private?

McKittrick: Security, of course. But if somebody actually analyzes it, a lot of times it will be about cost and data access, and the ease of data egress, because getting your data back can sometimes be a challenge.

Also, there is a realization that even though I may have strict service-level agreements (SLAs), if something goes wrong they are not going to save my business. If that thing tanks, do I want to give that business away? I have some clients who absolutely will not.

Gardner: Control, and so being able to sleep well at night.

McKittrick: Absolutely. I have other clients that we can speak about who have HIPAA requirements, and they are privately held and privately owned. And literally the CEO says, "I am not doing it." And he doesn't care what it costs.

Gardner: If there were a huge delta between the price of going with a public cloud or staying private, sure. But that delta is closing. So you can have the best of both worlds -- and not pay a very high penalty nowadays.

McKittrick: If done properly, certainly from my experience. We have been able to prove that you can run an agile, cloud-like infrastructure or private cloud as cost-effectively -- or even more cost-effectively -- than you can in the public clouds. There are certainly places for both in the market.

Gardner: It's going to vary, of course, from company to company -- and even department to department within a company -- but the fact is that the choice is there.

McKittrick: No doubt about it, it absolutely is.

Gardner: Tell us about ACS, your role there, and how the company is defining what you consider the best of hybrid cloud environments.

McKittrick: We are a relatively large reseller, about $600 million. We have specialized in data center practices for 27 years. So we have been in business quite some time and have had to evolve with the IT industry. Structurally, we are fairly conventional from the standpoint that we are a typical reseller, but we pride ourselves on our technical acumen.

Because we have some very, very large clients and have worked with them to get on their technology boards, we feel like we have a head start on what's really coming down the pipe -- we are maybe one to two years ahead of the general marketplace. We feel that we have a thought leadership edge there, and we use that, as well as very senior engineering leadership in our organization, to tell us what we are supposed to be doing.

Gardner: I know you p[...]
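McKittrick's claim that a well-run private cloud can match public-cloud economics is, at heart, a utilization argument: owned capacity amortizes a fixed cost, while on-demand capacity charges per hour. A toy break-even comparison, with every number an invented placeholder; a real TCO model adds power, staff, licensing, and egress costs:

```python
# Toy monthly cost comparison: amortized private capacity vs. per-hour
# public instances. All numbers are illustrative placeholders.
HOURS_PER_MONTH = 730

def private_monthly(server_capex: float, years: int, servers: int,
                    opex_per_server: float) -> float:
    # Straight-line amortization of hardware plus fixed monthly opex.
    amortized = server_capex / (years * 12)
    return servers * (amortized + opex_per_server)

def public_monthly(instances: int, price_per_hour: float,
                   avg_utilization: float) -> float:
    # On-demand cost scales with how many hours instances actually run.
    return instances * price_per_hour * HOURS_PER_MONTH * avg_utilization

if __name__ == "__main__":
    priv = private_monthly(server_capex=12_000, years=4, servers=10,
                           opex_per_server=150)
    # Steady 24x7 workloads (utilization near 100%) favor owned capacity;
    # bursty ones (low utilization) favor on-demand.
    for util in (1.0, 0.5, 0.2):
        pub = public_monthly(instances=40, price_per_hour=0.20,
                             avg_utilization=util)
        print(f"util={util:.0%}: private ${priv:,.0f} vs public ${pub:,.0f}")
```

The crossover behavior is the whole point: steady, well-utilized workloads are where "more cost-effectively than public cloud" becomes plausible, while spiky workloads still argue for renting.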



Inside story on HPC’s AI role in Bridges 'strategic reasoning' research at CMU

2017-10-10T15:35:53.231-04:00

The next BriefingsDirect high performance computing (HPC) success interview examines how strategic reasoning is becoming more common and capable -- even using imperfect information.

We'll now learn how Carnegie Mellon University and a team of researchers there are producing amazing results with strategic reasoning thanks in part to powerful new memory-intense systems architectures.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about strategic reasoning advances, please join me in welcoming Tuomas Sandholm, Professor and Director of the Electronic Marketplaces Lab at Carnegie Mellon University in Pittsburgh. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about strategic reasoning and why imperfect information is often the reality that these systems face.

Sandholm: In strategic reasoning we take the word "strategic" very seriously. It means game-theoretic, so in multi-agent settings where you have more than one player, you can't just optimize as if you were the only actor -- because the other players are going to act strategically. What you do affects how they should play, and what they do affects how you should play.

That's what game theory is about. In artificial intelligence (AI), there has been a long history of strategic reasoning. Most AI reasoning -- not all of it, but most of it until about 12 years ago -- was really about perfect information games like Othello, Checkers, Chess and Go. And there has been tremendous progress. But these complete information, or perfect information, games don't really model real business situations very well. Most business situations are of imperfect information.

Know what you don't know

So you don't know the other guy's resources, their goals and so on. You then need totally different algorithms for solving these games, or game-theoretic solutions that define what rational play is, or opponent exploitation techniques where you try to find out the opponent's mistakes and learn to exploit them.

So totally different techniques are needed, and this has way more applications in reality than perfect information games have.

Gardner: In business, you don't always know the rules. All the variables are dynamic, and we don't know the rationale or the reasoning behind competitors' actions. People sometimes are playing offense, defense, or a little of both. Before we dig in to how this is being applied in business circumstances, explain your proof of concept involving poker. Is it Five-Card Draw?

Sandholm: No, we're working on a much harder poker game called Heads-Up No-Limit Texas Hold'em as the benchmark. This has become the leading benchmark in the AI community for testing these application-independent algorithms for reasoning under imperfect information.

The algorithms really have nothing to do with poker, but we needed a common benchmark, much like the IC chip makers have their benchmarks. We compare progress year-to-year and compare progress across the different research groups around the world. Heads-Up No-Limit Texas Hold'em turned out to be a great benchmark because it is a huge game of imperfect information.

It has 10 to the 161 different situations that a player can face. That is one followed by 161 zeros. And if you think about that, it's not only more than the number of atoms in the universe, but even if, for every atom in the universe, you had a whole other universe and counted all those atoms in those universes -- it would still be more than that.

Gardner: This is as close to infinity as you can probably get, right?

Sandholm: Ha-ha, ba[...]
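Sandholm's universe-of-universes comparison checks out with simple arithmetic: the number of atoms in the observable universe is commonly estimated at around 10^80 (that figure is the standard order-of-magnitude estimate, not from the interview), so even 10^80 universes of 10^80 atoms each contain only 10^160 atoms, still a factor of ten short of 10^161 game situations:

```python
# Checking the scale claim: 10^161 situations vs. atoms in the universe.
ATOMS_IN_UNIVERSE = 10**80      # common order-of-magnitude estimate
SITUATIONS = 10**161            # figure quoted in the interview

# One whole universe of atoms for every atom in the universe:
atoms_in_universe_of_universes = ATOMS_IN_UNIVERSE * ATOMS_IN_UNIVERSE

assert SITUATIONS > atoms_in_universe_of_universes   # 10^161 > 10^160
print(SITUATIONS // atoms_in_universe_of_universes)  # -> 10
```

The practical consequence of that scale is that the game tree can never be enumerated; solvers must work on abstractions of the game and generalize, which is why supercomputer-class memory and compute matter here.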



Philips teams with HPE on ecosystem approach to improve healthcare informatics-driven outcomes

2017-10-10T11:41:23.349-04:00

The next BriefingsDirect healthcare transformation use-case discussion focuses on how an ecosystem approach to big data solutions brings about improved healthcare informatics-driven outcomes.

We'll now learn how a Philips Healthcare Informatics and Hewlett Packard Enterprise (HPE) partnership creates new solutions for the global healthcare market and provides better health outcomes for patients by managing data and intelligence better.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Joining us to explain how companies tackle the complexity of solutions delivery in healthcare by using advanced big data and analytics is Martijn Heemskerk, Healthcare Informatics Ecosystem Director for Philips, based in Eindhoven, the Netherlands. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why are partnerships so important in healthcare informatics? Is it because there are clinical considerations combined with big data technology? Why are these types of solutions particularly dependent upon an ecosystem approach?

Heemskerk: It's exactly as you say, Dana. At Philips we are very strong at developing clinical solutions for our customers. But nowadays those solutions also require an IT infrastructure layer underneath to solve the total equation. As such, we are looking for partners in the ecosystem because we at Philips recognize that we cannot do everything alone. We need partners in the ecosystem that can help address the total solution -- or the total value proposition -- for our customers.

Gardner: I'm sure it varies from region to region, but is there a cultural barrier in some regard to bringing cutting-edge IT in particular into healthcare organizations? Or have things progressed to where technology and healthcare converge?

Heemskerk: Of course, there are some countries that are more mature than others. Therefore the level of healthcare and the type of solutions that you offer to different countries may vary. But in principle, many of the challenges that hospitals everywhere are going through are similar. Some of the not-so-mature markets are also trying to leapfrog so that they can deliver different solutions that are up to par with the mature markets.

Gardner: Because we are hearing a lot about big data and edge computing these days, we are seeing the need for analytics at a distributed-architecture scale. Please explain how big data changes healthcare.

Big data value-add

Heemskerk: What is very interesting for big data is what happens if you combine it with value-based care. It's a very interesting topic. For example, nowadays a hospital is not reimbursed for every procedure that it does in the hospital -- the value is based more on the total outcome of how a patient recovers.

This means that more analytics need to be gathered across different elements of the process chain before reimbursement will take place. In that sense, analytics become very important for hospitals in measuring whether things are being done efficiently, and in determining whether the costs are okay.

Gardner: The same data that can be used to be more efficient can also be used for better healthcare outcomes and understanding the path of the disease, or for the efficacy of procedures, and so on. A great deal can be gained when data is gathered and used properly.

Heemskerk: That is correct. And you see, indeed, that there is much more data nowadays, and you can utilize it for all kinds of different things.

Gardner: Please help us understand the relationship between your organization and HPE. Where does your part of the value begin and end, and[...]



Inside story: How Ormuco abstracts the concepts of private and public cloud across the globe

2017-09-26T16:05:29.033-04:00

The next BriefingsDirect cloud ecosystem strategies interview explores how a Canadian software provider delivers a hybrid cloud platform for enterprises and service providers alike.

We'll now learn how Ormuco has identified underserved regions and has crafted a standards-based hybrid cloud platform to allow its users to attain world-class cloud services just about anywhere.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore how new breeds of hybrid cloud are coming to more providers around the globe thanks to the Cloud28+ consortium are Orlando Bayter, CEO and Founder of Ormuco in Montréal, and Xavier Poisson Gouyou Beachamps, Vice President of Worldwide Indirect Digital Services at Hewlett Packard Enterprise (HPE), based in Paris. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let's begin with this notion of underserved regions. Orlando, why is it that many people think that public cloud is everywhere for everyone, when there are many places around the world where it is still immature? What is the opportunity to serve those markets?

Bayter: There are many countries underserved by the hyperscale cloud providers. If you look at Russia, the United Arab Emirates (UAE), and elsewhere around the world, they want to comply with regulations on security and data sovereignty, and they need to have the clouds locally to comply.

Ormuco targets those countries that are underserved by the hyperscale providers and enables service providers and enterprises to consume cloud locally, in ways they can't do today.

Gardner: Are you allowing them to have a private cloud on-premises as an enterprise? Or do local cloud providers offer a common platform, like yours, so that they get the best of both the private and public hybrid environment?

Bayter: That is an excellent question. There are many workloads that cannot leave the firewall of an enterprise. With that, you now need to deliver the economies, ease of use, flexibility, and orchestration of a public cloud experience in the enterprise. At Ormuco, we deliver a platform that provides the best of the two worlds. You are still leveraging your data center, and you don't need to worry whether it's on-premises or off-premises.

It's a single pane of glass. You can move the workloads in that global network via established providers throughout the ecosystem of cloud services.

Gardner: What are the attributes of this platform that both your enterprise and service provider customers are looking for? What's most important to them in this hybrid cloud platform?

Bayter: As I said, there are some workloads that cannot leave the data center. In the past, you couldn't get the public cloud inside your data center. You could have built a private cloud, but you couldn't get an Amazon Web Services (AWS)-like solution or a Microsoft Azure-like solution on-premises.

We have been running this now for two years, and what we have noticed is that enterprises want to have the ease-of-use, self-service, and orchestration on-premises. Now, they can connect to a public cloud based on the same platform, and they don't have to worry about how to connect it or how it will work. They just decide where to place this.

They have security, can comply with regulations, and gain control -- plus 40 percent savings compared with VMware, and up to 50 to 60 percent compared with AWS.

Gardner: I'm also interested in the openness of the platform. Do they have certain requirements as to t[...]



How Nokia refactors the video delivery business with new time-managed IT financing models

2017-09-19T17:22:06.798-04:00

The next BriefingsDirect IT financing and technology acquisition strategies interview examines how Nokia is refactoring the video delivery business. Learn both about new video delivery architectures and the creative ways media companies are paying for the technology that supports them.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe new models of Internet Protocol (IP) video and time-managed IT financing is Paul Larbey, Head of the Video Business Unit at Nokia, based in Cambridge, UK. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: It seems that the video-delivery business is in upheaval. How are video delivery trends coming together to make it necessary to rethink architectures? How are pricing models and business models changing, too?

Larbey: We sit here in 2017, but let's look back 10 years to 2007. There were a couple of key events in 2007 that dramatically shaped how we all consume video today and how, as a company, we use technology to go to market.

It's been 10 years since the creation of the Apple iPhone. The iPhone sparked whole new device types, moving eventually into the iPad. Not only that, Apple developed a lot of underlying technology for how you stream video and how you protect video over IP -- technology we still use today. So Apple not only created a new device type and avenue for us to watch video, it also created new underlying protocols.

It was also 10 years ago that Netflix first began to offer a video streaming service. So if you look back, I see one year in which how we all consume our video today was dramatically changed by a couple of events.

If we fast-forward and look to where that goes in the future, there are two trends we see today that will create challenges tomorrow. Video has become truly mobile. When we talk about mobile video, we mean watching some films on our iPad or on our iPhone -- so not on a big TV screen; that is what most people mean by mobile video today.

The future is personalized

When you can take your video with you, you want to take all your content with you. You can't do that today. That has to happen in the future. When you are on an airplane, you can't take your content with you. Connectivity needs to extend so that you can take your content with you no matter where you are.

Take the simple example of a driverless car. Now, you are driving along, watching the satellite-navigation feed, watching the traffic, and keeping the kids quiet in the back. When driverless cars come, what are you going to be doing? You are still going to be keeping the kids quiet, but there is a void, a space that needs to be filled with activity, and clearly extending the content into the car is the natural next step.

And the final challenge is around personalization. TV will become a lot more personalized. Today we all get the same user experience. If we are all on the same service provider, it looks the same -- it's the same color, it's the same grid. There is no reason why that should all be the same. There is no reason why my kids shouldn't have a different user interface. The user interface presented to me in the morning may be different than the user interface presented to me in the evening.
There is no reason why I should have 10 pages of channels that I have to go through to find something that I want to watch. Why aren’t all those channels specifically curated for [...]



IoT capabilities open new doors for Miami telecoms platform provider Identidad IoT

2017-09-05T13:49:23.691-04:00

The next BriefingsDirect Internet of Things (IoT) strategies insights interview focuses on how a Miami telecommunications products provider has developed new breeds of services to help manage complex edge and data scenarios. We will now learn how IoT platforms and services help to improve network services, operations, and business goals -- for carriers and end users alike.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore what is needed to build an efficient IoT support business is Andres Sanchez, CEO of Identidad IoT in Miami. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How has your business changed in the telecoms support industry, and why is IoT such a big opportunity for you?

Sanchez: With the new over-the-top (OTT) content technology, and the way that it came into the picture and took part of the whole communications chain of business, the business is getting very tough in telecoms. When we began evaluating what IoT can do and seeing the possibilities, we saw a new wave. We understand that it's not about connectivity, it's not about the 10 percent of the value chain -- it's more about the solutions.

We saw a very good opportunity to start something new: to take the experience and technology we have in telecoms, bring in new people and new developers, and start building solutions. That's what we are doing right now.

Gardner: So as the voice telecoms business trails off, there is a new opportunity at the edge for data and networks to extend to a variety of use cases. What are some of the use cases you are seeing now in IoT that are a growth opportunity for your business?

Sanchez: IoT is everywhere. The beauty of IoT is that you can find solutions everywhere you look. What we have found is that when people think about IoT, they think about the connected home, the connected car, or smart parking, where a light simply shows green or red depending on whether a space is occupied. But IoT is more than that.

There are two ways to generate revenue in IoT. One is by having new products. The second is understanding what we can do better at the operational level. It's in this way that we are putting in sensors, measuring things, and analyzing things. You can reduce your operational cost, or be more effective in the way that you are doing business. It's not only getting the information; it's using that information to automate processes that will make your company better.

Gardner: As organizations recognize that there are new technologies enabling this smart edge and smart network, what is preventing them from taking advantage of it?

Sanchez: Companies think that they just have to connect the sensors, that they only have to digitize their information. They haven't realized that they really have to go through a digital transformation. It's not about connecting the sensors that are already there; it's building a solution using that information. They have to reorganize and reinvent their organizations.

For example, it's not about taking a sensor, putting the sensor on the machine, and just starting to take information and watch it on a screen.
It's taking the information and being able to see and check for patterns -- to predict when a machine is going to break, or when a machine at certain temperatures starts to work better or worse. It's being able to be more productive without having to do more work. It's ju[...]
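That pattern detection can start very simply: watch a sensor's recent behavior and flag readings that fall far outside it. Below is a minimal, hypothetical sketch in Python -- not Identidad's actual system -- that flags anomalous temperature readings using a rolling mean and standard deviation; the window size, threshold, and readings are illustrative assumptions.

from collections import deque
from statistics import mean, stdev

def make_anomaly_detector(window=60, threshold=3.0):
    # Flag readings more than `threshold` standard deviations away
    # from the mean of the last `window` readings.
    history = deque(maxlen=window)
    def check(reading):
        anomalous = False
        if len(history) >= 10:  # wait for a baseline before judging
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(reading - mu) > threshold * sigma:
                anomalous = True
        history.append(reading)
        return anomalous
    return check

# Hypothetical usage: temperature readings streamed from one machine.
check_temp = make_anomaly_detector()
for t in [71.2, 71.4, 70.9, 71.1, 71.3, 71.0, 71.2, 70.8, 71.1, 71.0, 88.5]:
    if check_temp(t):
        print(f"Possible fault ahead: {t} is far outside recent behavior")

Kept per sensor, a rolling baseline like this is one way to spot drift toward a failure before a fixed alarm limit would ever trip.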



Inside story on developing the ultimate SDN-enabled hybrid cloud object storage environment

2017-08-30T13:33:05.129-04:00

The next BriefingsDirect inside story interview explores how a software-defined data center (SDDC)-focused systems integrator developed an ultimate open-source object storage environment. We're now going to learn how Key Information Systems crafted a storage capability that may have broad extensibility into such realms as hybrid cloud and multi-cloud support.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us better understand a new approach to open-source object storage is Clayton Weise, Director of Cloud Services at Key Information Systems in Agoura Hills, California. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What prompted you to improve on the way that object storage is being offered as a service? How might this become a new business opportunity for you?

Weise: About a year ago, at Hewlett Packard Enterprise (HPE) Discover, I was wandering the event floor. We had just gotten out of a meeting with SwitchNAP, a major data center in Las Vegas, where we had been talking about some preferred concepts and deployments for storage for their clients.

That discussion evolved into realizing that there are a number of clients inside of Switch and their ecosystem that could make use of storage that was more locally based, that needed to be closer at hand. There were cost savings to be gained from a connection within the same data center, or within the same fiber network.

Pulling data in and out of a cloud

Under this model, there would be significantly less expensive ways of pulling data in and out of a cloud, since you wouldn't have transfer fees as you normally would. There would also be advantages for privacy and for cutting latency, among other benefits, because of a private network all run by Switch and through their fiber network. So we looked at this and thought it might be interesting.

In discussions with a number of groups within HPE while wandering the floor at Discover, we found that there were some pretty interesting ways that we could play games with the network to allow clients to not have to uproot the way they do things, or force them to do things, for lack of a better term, "our way." If you go to Amazon Web Services or you go to Microsoft Azure, you do it the Microsoft way, or you do it the Amazon way. You don't really have a choice, since you have to follow their guidelines.

Where we saw value is in the mid-market space -- clients ranging from a couple of hundred million dollars up to maybe a couple of billion dollars in annual revenue. They generally use object storage as an inexpensive way to store archival, or less-frequently accessed, data. So cloud storage became an alternative to tape and long-term storage.

We've had this massive explosion of unstructured data, files, and all sorts of things. We have a number of clients in medical and finance, and they have seen a huge spike in data. The challenge is that deploying your own object storage is a fairly complex operation, and it requires a minimum number of petabytes to get started. In that mid-market, they are not typically measuring their storage at the petabyte level.
These customers are more typically in the tens to hundreds of terabytes range, and so they need an inexpensive way to offload th[...]
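Part of what makes in-data-center object storage attractive is that it can expose the same S3-style API that applications already speak, so only the endpoint changes. The sketch below is a hypothetical illustration of that point, assuming an S3-compatible service and the boto3 Python library; the endpoint URL, bucket name, and credentials are placeholders, not Key Information Systems' actual service.

import boto3

# Point a standard S3 client at a local, S3-compatible object store
# instead of a hyperscale public cloud -- only the endpoint differs.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example-datacenter.net",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",          # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

# Archive a file with the same call a cloud-native app would make;
# over a shared fiber network there is no per-gigabyte transfer fee.
s3.upload_file("backup-2017-08.tar.gz", "archive-bucket", "backup-2017-08.tar.gz")

# Retrieval is the mirror-image call, again without egress charges.
s3.download_file("archive-bucket", "backup-2017-08.tar.gz", "restored.tar.gz")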



How IoT and OT collaborate to usher in the data-driven factory of the future

2017-08-22T14:49:32.278-04:00

The next BriefingsDirect Internet of Things (IoT) technology trends interview explores how innovation is impacting modern factories and supply chains. We'll now learn how a leading-edge manufacturer in the global automotive industry, Hirotec, takes advantage of IoT and Operational Technology (OT) combined to deliver dependable, managed, and continuous operations.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us define the best factory-of-the-future attributes is Justin Hester, Senior Researcher in the IoT Lab at Hirotec Corp. in Hiroshima, Japan. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What's happening in the market with business and technology trends that's driving this need for more modern factories and more responsive supply chains?

Hester: Our customers are demanding shorter lead times. There is a drive for even higher quality, especially in automotive manufacturing. We're also seeing a much higher level of customization requests coming from our customers. So how can we create products that better match the unique needs of each customer?

As we look at how we can continue to compete in an ever-competitive environment, we are starting to see how solutions from IoT can help us.

Gardner: What is it about IoT and Industrial IoT (IIoT) that allows you to do things that you could not have done before?

Hester: Within the manufacturing space, a lot of data has been there for years -- for decades. Manufacturing has been very good at collecting data. The challenge we've had, though, is bringing in that data in real-time, because the amount of data is so large. How can we act on that data quicker -- not on a day-by-day or week-by-week basis, but on a minute-by-minute or second-by-second basis? And how do we take that data and contextualize it?

It's one thing in a manufacturing environment to say, "Okay, this machine is having a challenge." But it's another thing if I can say, "This machine is having a challenge, and in the context of the factory, here's how it's affecting downstream processes, and here's what we can do to mitigate those downstream challenges." That's where IoT starts bringing us a lot of value. The analytics -- the real-time contextualization of the data we've already had in the manufacturing area -- are very helpful.

Gardner: So we're moving from what may have been a gather, batch, analyze, report process to taking discrete analysis opportunities and injecting them into a wider context of efficiency and productivity. This is a fairly big change. This is not incremental; this is a step-change advancement, right?

A huge step-change

Hester: It's a huge change for the market. It's a huge change for us at Hirotec. One of the things we like to talk about is what we jokingly call the Tuesday Morning Meeting. In the morning at a manufacturing facility, everyone gets together and talks about what happened yesterday, and what we can do today to make up for what happened yesterday.
Instead, now we’re making that huge step-change to say,  “Why don't we get the data to the right people with the right context and let them make a decision so they can affect what's going on, instead of waiting until tomorrow to [...]
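One way to picture that contextualization step is to attach factory topology to a raw machine alert, so the alert arrives already describing its downstream impact. The following Python sketch is hypothetical -- the line layout and machine names are invented for illustration, not Hirotec's actual factory model.

# Hypothetical line layout: each machine feeds the ones listed after it.
DOWNSTREAM = {
    "stamping_press_1": ["welding_cell_2"],
    "welding_cell_2": ["assembly_station_3"],
    "assembly_station_3": ["inspection_bay_4"],
    "inspection_bay_4": [],
}

def downstream_of(machine):
    # Walk the layout to list every process a fault could ripple into.
    affected, stack = [], list(DOWNSTREAM.get(machine, []))
    while stack:
        m = stack.pop()
        if m not in affected:
            affected.append(m)
            stack.extend(DOWNSTREAM.get(m, []))
    return affected

def contextualize(alert):
    # Enrich a raw alert with the factory context needed to act on it.
    return {**alert, "downstream_impact": downstream_of(alert["machine"])}

raw = {"machine": "welding_cell_2", "issue": "temperature drift"}
print(contextualize(raw))
# The alert now names assembly_station_3 and inspection_bay_4 as at risk --
# the difference between "a machine has a challenge" and "here's how it
# affects downstream processes."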



DreamWorks Animation crafts its next era of dynamic IT infrastructure

2017-08-15T13:50:36.877-04:00

The next BriefingsDirect Voice of the Customer thought leader interview examines how DreamWorks Animation is building a multipurpose, all-inclusive, and agile data center capability. Learn here why a new era of responsive and dynamic IT infrastructure is demanded, and how one high-performance digital manufacturing leader aims to get there sooner rather than later.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe how an entertainment industry innovator leads the charge for bleeding-edge IT-as-a-service capabilities is Jeff Wike, CTO of DreamWorks Animation in Glendale, California. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us why the older way of doing IT infrastructure and hosting apps and data just doesn't cut it anymore. What has made that run out of gas?

Wike: You have to continue to improve things. We are in a world where technology is advancing at an unbelievable pace. The amount of data, the capability of the hardware, the intelligence of the infrastructure are all advancing. In order for any business to stay ahead of the curve -- to really drive value into the business -- it has to continue to innovate.

Gardner: IT has become more pervasive in what we do. I have heard you refer to yourselves as digital manufacturers. Are the demands of your industry also a factor in making it difficult for IT to keep up?

Wike: When I say we are a digital manufacturer, it's because we are a place that manufactures content, whether it's animated films or TV shows; that content is all made on the computer. An artist sits in front of a workstation or a monitor and builds digital assets that we put through simulations and rendering, so that in the end it all comes together to produce a movie.

That's all about manufacturing. We have a pipeline, but it's really like an assembly line. I was looking at a slide today about Henry Ford coming up with the first assembly line; it's exactly what we are doing, except instead of adding a car part, we are adding a character, adding hair to a character, adding clothes, adding an environment, and putting things into that environment.

We are manufacturing that image, that story, in a linear way, but also in an iterative way. We are constantly adding more details as we embark on that three-to-four-year process of creating one animated film.

Gardner: It also seems that we are now taking that analogy of the manufacturing assembly line to a higher plane, because you want an assembly line that doesn't just make cars -- it can make cars and trains and submarines and helicopters -- without changing the assembly line; you adjust it and utilize it properly. So it seems to me that we are at a cusp in IT, where the agility of the infrastructure and its responsiveness to your workloads and demands are better than ever.

Greater creativity, increased efficiency

Wike: That's true. If you think about this animation process, or any digital manufacturing process, one issue that you have to account for is legacy workflows, legacy software, and legacy data formats -- all these things are inhibitors to innovation. There are a lot of tools. We actually write our own software, and we're very involved in computer science projects at the studio.

We'll ask ourselves, "How do you innovate?
How can you change your environment to be able to move forward and innovate and sti[...]



Enterprises look for partners to make the most of Microsoft Azure Stack apps

2017-08-01T15:23:29.722-04:00

The next BriefingsDirect Voice of the Customer hybrid cloud advancements discussion explores the application development and platform-as-a-service (PaaS) benefits from Microsoft Azure Stack. We'll now learn how ecosystems of solutions partners are teaming to provide specific vertical industries with applications and services that target private cloud deployments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore the latest in successful cloud-based applications development and deployment is our panel: Martin van den Berg, Vice President and Cloud Evangelist at Sogeti USA, based in Cleveland, and Ken Won, Director of Cloud Solutions Marketing at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Martin, what are some of the trends driving the adoption of hybrid cloud applications, specifically around the Azure Stack platform?

Van den Berg: What our clients are dealing with on a daily basis is an ever-expanding data center; they see ever-expanding private clouds in their data centers. They are trying to get into the hybrid cloud space to reap all the benefits from both an agility and a compute perspective.

They are trying to get out of the data center space, to see how they can leverage the cloud for that ever-growing demand. What we see is that Azure Stack will bridge the gap between the cloud they have on-premises and the public cloud they want to leverage -- and basically integrate the two in a true hybrid cloud scenario.

Gardner: What sorts of applications are your clients calling for in these clouds? Are these cloud-native apps, greenfield apps? What are they hoping to do first and foremost when they have that hybrid cloud capability?

Van den Berg: We see a couple of different streams there. One is cloud-native development. More and more of our clients are going into cloud-native development. We recently published a white paper in which we see that 30 percent of applications being built today are already cloud-native. We expect that to grow to more than 60 percent of new applications over the next three years.

The issue that some of our clients have has to do with some of the data being consumed in these applications. Either due to compliance issues, or because their information security divisions are not too happy about it, they don't want to put this data in the public cloud. Azure Stack bridges that gap as well.

Microsoft Azure Stack can bridge the gap between the on-premises data center and what they do in the cloud. They can leverage the whole Azure public cloud PaaS while still having their data on-premises in their own data center. That's a unique capability.

On the other hand, we also see some of our clients looking at Azure Stack to bridge the gap in the infrastructure-as-a-service (IaaS) space. Even in that space, where clients are not willing to expand their own data center footprint, they can use Azure Stack as a means to move seamlessly to the Azure public IaaS cloud.

Gardner: Ken, does this jibe with what you are seeing at HPE -- that people are starting to creatively leverage hybrid models?
For example, are they putting apps in one type of cloud and data in another, a[...]
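The "same platform on both sides" point is the crux of the Azure Stack story: in principle, the same Azure Resource Manager call works against public Azure and against an on-premises Azure Stack endpoint. Below is a minimal, hypothetical Python sketch of that idea, assuming the azure-identity and azure-mgmt-resource libraries; the subscription ID, Azure Stack URL, and region names are placeholders, and a real Azure Stack deployment would also need an API-version profile and authority settings that are omitted here for brevity.

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder

# Same management client, two control planes: public Azure by default,
# and an on-premises Azure Stack instance via its own ARM endpoint.
public_cloud = ResourceManagementClient(credential, subscription_id)
azure_stack = ResourceManagementClient(
    credential,
    subscription_id,
    base_url="https://management.local.azurestack.external",  # hypothetical endpoint
)

# The identical call creates a resource group in either environment, so
# placement becomes a deployment-time choice rather than an app rewrite.
for client, region in [(public_cloud, "eastus"), (azure_stack, "local")]:
    client.resource_groups.create_or_update("demo-rg", {"location": region})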



How a Florida school district tames the wild west of education security at scale and on budget

2017-07-26T14:20:26.544-04:00

Bringing a central IT focus to large public school systems has always been a challenge, but bringing a security focus to thousands of PCs and devices has been compared to bringing law and order to the Wild West. For the Clay County School District in Florida, a team of IT administrators is nonetheless grabbing the bull by the horns to create a new culture of computing safety -- without breaking the bank.

The next BriefingsDirect security insights discussion examines how Clay County is building a secure posture for its edge, network, and data centers while allowing the right mix of access for the exploration necessary in an educational environment.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how to ensure that schools are technically advanced and secure at low cost and at high scale, we're joined by Jeremy Bunkley, Supervisor of the Clay County School District Information and Technology Services Department; Jon Skipper, Network Security Specialist at the Clay County School District, and Rich Perkins, Coordinator for Information Services at the Clay County School District. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the biggest challenges to improving security, compliance, and risk reduction at a large school district?

Bunkley: I think the answer actually scales across the board. The problem even bridges into businesses. It's the culture of change -- of making people recognize security as a forethought, instead of an afterthought. It has been a challenge in education, which can be a technology laggard.

Getting people to start the recognition process of making sure that they are security-aware has been quite the battle for us. I don't think it's going to end anytime soon. But we are starting to get our key players on board with understanding that you can't keep Social Security numbers, credit card numbers, and personally identifiable information (PII) in clear text. It has been an interesting ride for us, let's put it that way.

Gardner: Jon, culture is such an important part of this, but you also have to have tools and platforms in place to reinforce people when they do the right thing. Tell us about what you have needed on your network, and what your technology approach has been.

Skipper: Education is one of those weird areas where software development has always been lacking on the security side of the house. It has never even been inside the room. So one of the things that we have tried to do in education, at least within the Clay County School District, is to modify that view through change management. We are trying to introduce a security focus. We try to interject ourselves and highlight areas that might be a bad practice.

One of our vendors uses plain text for passwords, so we went through with them and showed them why that's a bad practice, and we made a little bit of improvement with that.

I also evaluate our policies and how we manage the domains, sometimes finding material from long ago that is no longer needed. We can pull that information out -- for instance, where someone had put Social Security numbers into a document that was no longer needed. We have been trying really hard to figure that stuff out and then knock it down as much as we can.

Access for all, but not all-access

Gardner: Whenever you are trying to change people's perceptions, behaviors, cult[...]
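On the plain-text password example: the standard remediation is to store only salted, slowly hashed verifiers, so that a leaked table does not expose the passwords themselves. Here is a minimal sketch using only Python's standard library; it illustrates the general practice, not the specific fix the district's vendor implemented.

import hashlib, hmac, os

def hash_password(password, iterations=200_000):
    # Return a random salt and a PBKDF2 hash; store these, never the password.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=200_000):
    # Recompute the hash and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False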



How Imagine Communications leverages edge computing and HPC for live multiscreen IP video

2017-07-25T15:13:13.588-04:00

The next BriefingsDirect Voice of the Customer HPC and edge computing strategies interview explores how a video delivery and customization capability has moved to the network edge -- and closer to consumers -- to support live, multi-screen Internet Protocol (IP) entertainment delivery.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We'll learn how hybrid technology and new workflows for IP-delivered digital video are being re-architected -- with significant benefits to the end-user experience, as well as new monetization opportunities for content providers. Our guest is Glodina Connan-Lostanlen, Chief Marketing Officer at Imagine Communications in Frisco, Texas. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Your organization has many major media clients. What are the pressures they are facing as they look to the new world of multi-screen video and media?

Connan-Lostanlen: The number-one concern of the media and entertainment industry is the fragmentation of their audience. We live with a model supported by advertising and subscriptions that relies primarily on linear programming, with people watching TV at home.

And guess what? Now they are watching it on the go -- on their telephones, on their iPads, on their laptops, anywhere. So providers have to find a way to capture that audience, justify the value of that audience to their advertisers, and deliver video content that is relevant to them. And that means meeting consumer demand for several types of content, delivered at the very time that people want to consume it.

So it brings a whole range of technology and business challenges that our media and entertainment customers have to overcome. But addressing these challenges with new technology that increases agility and velocity to market also creates opportunities.

For example, they can now try new content. They can try new programs and new channels, and they don't have to keep them forever if they don't work. The new models create opportunities to be more creative, to focus on what they are good at, which is creating valuable content. At the same time, they have to make sure that they cater to all these different audiences, whether static or on the go.

Gardner: The media industry has faced so much change over the past 20 years, but this is a major, perhaps once-in-a-generation, level of change -- when you go to fully digital, IP-delivered content. As you say, the audience is pulling the providers to multi-screen support, but there is also the capability now -- with the new technology on the back end -- to have much more of a relationship with the customer: a one-to-one relationship and even customization, rather than one-to-many. Tell us about the drivers on the personalization level.

Connan-Lostanlen: That's another big upside of the fragmentation, and of the advent of IP technology -- all the way from content creation to making a program and distributing it. It gives the content creators access to unique viewers, and the ability to really engage with them -- knowing what they like -- and then to potentially target advertising to them. The technology is there. The challenge remains how to justify the business model, how to value the targeted advertising; ther[...]



How The Open Group Healthcare Forum and Health Enterprise Reference Architecture cure process and IT ills

2017-07-24T11:59:48.036-04:00

The next BriefingsDirect healthcare thought leadership panel discussion examines how a global standards body, The Open Group, is working to improve how the healthcare industry functions. We'll now learn how The Open Group Healthcare Forum (HCF) is advancing best practices and methods for better leveraging IT in healthcare ecosystems. And we'll examine the forum's Health Enterprise Reference Architecture (HERA) initiative and its role in standardizing IT architectures. The goal is to foster better boundaryless interoperability within and between healthcare public and private sector organizations.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about improving the processes and IT that better support healthcare, please welcome our panel of experts: Oliver Kipf, The Open Group Healthcare Forum Chairman and Business Process and Solution Architect at Philips, based in Germany; Dr. Jason Lee, Director of the Healthcare Forum at The Open Group, in Boston, and Gail Kalbfleisch, Director of the Federal Health Architecture at the US Department of Health and Human Services in Washington, D.C. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: For those who might not be that familiar with the Healthcare Forum and The Open Group in general, tell us why the Healthcare Forum exists, what its mission is, and what you hope to achieve through your work.

Lee: The Healthcare Forum exists because there is a huge need to architect the healthcare enterprise, which is approaching 20 percent of the gross domestic product (GDP) of the economy in the US, and approaching that level in other developed countries, including in Europe.

There is a general feeling that enterprise architecture is somewhat behind in this industry relative to other industries. There are important gaps to fill that will help stakeholders in healthcare -- whether they are in hospitals, healthcare delivery systems, or innovation hubs in organizations of different sorts, such as consulting firms. They can better leverage IT to achieve business goals through the use of best practices, lessons learned, and the accumulated wisdom of the various Forum members over many years of work. We want them to understand the value of our work so they can use it to address their needs.

Our mission, simply, is to help make healthcare information available when and where it's needed, and to accomplish that goal through architecting the healthcare enterprise. That's what we hope to achieve.

Gardner: As the chairman of the HCF, could you explain what a forum is, Oliver? What does it consist of? How many organizations are involved?

Kipf: The HCF is made up of its members, and I am really proud of this team. We are very passionate about healthcare. We are in the technology business, so we are more than just the governing bodies; we also have participation from the provider community. That makes the Forum true to the nature of The Open Group, in that we are global in nature, vendor-neutral, and business-oriented. We go from strategy to execution, and we want to bridge from business to technology. We take the foundation of The Open Group, and then we apply this to the HCF.

As there are many health standards out there, we really want to leverage [experience] from our 30 members to make standards work, by providing the right type of tools, frameworks, a[...]



Hybrid cloud ecosystem readies for impact from arrival of Microsoft Azure Stack

2017-07-21T13:45:55.443-04:00

The next BriefingsDirect cloud deployment strategies interview explores how hybrid cloud ecosystem players such as PwC and Hewlett Packard Enterprise (HPE) are gearing up to support the Microsoft Azure Stack private-public cloud continuum. We'll now learn what enterprises can do to make the most of hybrid cloud models and be ready specifically for Microsoft's solutions for balancing the boundaries between public and private cloud deployments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to explore the latest approaches for successful hybrid IT, we're joined by Rohit "Ro" Antao, a Partner at PwC, and Ken Won, Director of Cloud Solutions Marketing at HPE. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Ro, what are the trends driving adoption of hybrid cloud models, specifically Microsoft Azure Stack? Why are people interested in doing this?

Antao: What we have observed in the last 18 months is that a lot of our clients are now aggressively pushing toward the public cloud. In that journey, a couple of things are becoming loud and clear to them.

Journey to the cloud

Number one is that there will always be some sort of a private data center footprint. Certain workloads are not appropriate for the public cloud; certain workloads perform better in the private data center. So the first acknowledgment is that there is going to be both a private and a public side of how they deliver IT services.

Now, that being said, they have to begin building the capabilities and the mechanisms to manage these different environments seamlessly. As they go down this path, that's where we are seeing a lot of traction and focus. The other trend, in conjunction with that, is in the public cloud space, where we see a lot of traction around Azure. Microsoft has come on strong; they have been aggressively going after the public cloud market. Being able to have that seamless environment between private and public with Azure Stack is what's driving a lot of the demand.

Won: We at HPE are seeing that very similarly as well. We call it "hybrid IT," and we talk about how customers need to find the right mix of private, public, and managed services to fit their businesses. They may put some services in a public cloud, some in a private cloud, and some in a managed cloud. Depending on their company strategy, they need to figure out which workloads go where.

We have these conversations with many of our customers about how to determine the right placement for these different workloads -- taking into account things like security, performance, compliance, and cost -- and we help them evaluate this hybrid IT environment that they now need to manage.

Gardner: Ro, a lot of what people have used public cloud for is greenfield apps -- beginning in the cloud, developing in the cloud, deploying in the cloud -- but there's also interest in many enterprises around legacy applications and datasets. Are Azure Stack and hybrid cloud an opportunity for them to rethink where their older apps and data should reside?

Antao: Absolutely. When you look at the broader market, a lot of these businesses are competing today in very dynamic markets. When companies today think about strategy, it's no longer the 5- and 10-year strategy. The[...]



Advanced IoT systems provide analysis catalyst for the petrochemical refinery of the future

2017-07-18T14:44:55.767-04:00

The next BriefingsDirect Voice of the Customer Internet of Things (IoT) technology trends interview explores how IT combines with IoT to help create the refinery of the future. We'll now learn how a leading-edge petrochemical company in Texas is rethinking data gathering and analysis to foster safer environments and greater overall efficiency.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To help us define the best of the refinery-of-the-future vision are Doug Smith, CEO of Texmark Chemicals in Galena Park, Texas, and JR Fuller, Worldwide Business Development Manager for Edgeline IoT at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the top trends driving this need for a new refinery of the future? Doug, why aren't the refinery practices of the past good enough?

Smith: First of all, I want to talk about people. People are the catalysts who make this refinery of the future possible. At Texmark Chemicals, we have spent the last 20 years making capital investments in our infrastructure and our physical plant, and in the last four years we have put together a roadmap for our IT needs. Through our introduction to HPE, we have entered into a partnership that is not just a client-customer relationship. It's more than that, and it allows us to work together to discover IoT solutions that we can bring to bear on our IT challenges at Texmark. So we are on a voyage of discovery together -- and we are sailing out to sea. It's going great.

Gardner: JR, it's always impressive when a new technology trend aids and abets a traditional business, and then that business can show through innovation what should come next in the technology. How is that back-and-forth working? Where should we expect IoT to go in terms of business benefits in the not-too-distant future?

Fuller: One of the powerful things about the partnership and relationship we have is that we each respect and understand each other's "swim lanes." I'm not trying to be a chemical company; I'm trying to understand what they do and how I can help them. And they're not trying to become an IT or IoT company; their job is to make chemicals, and our job is to figure out the IT. We're seeing in Texmark the transformation from an Old World economy-type business to a New World economy-type business.

This is huge; this is transformational. As Doug said, they've made huge investments in their physical assets and what we call Operational Technology (OT). They have done that for the past 20 years. The people they have at Texmark who are using these assets are phenomenal. They possess decades of experience.

Yet IoT is really new for them. How to leverage that? They have said, "You know what? We've squeezed as much as we can out of OT technology, out of our people and our processes. Now, let's see what else is out there."

And through introductions to us and our ecosystem partners, we've been able to show them how we can help squeeze even more out of those OT assets using this new technology. So it's really exciting.

Gardner: Doug, let's level-set this a little bit for our audience. They might not all be familiar with the refinery business, or even the petrochemical industry. You[...]