
Customer Experience Matrix



This is the blog of David M. Raab, marketing technology consultant and analyst. Mr. Raab is Principal at Raab Associates Inc. The blog is named for the Customer Experience Matrix, a tool to visualize marketing and operational interactions between a company and its customers.



Updated: 2018-04-26T01:19:14.851-04:00

 



Building Trust Requires Innovation

2018-04-14T09:28:59.998-04:00

Trust has been chasing me like a hungry mosquito. It seems that everyone has suddenly decided that creating trust is the key to success, whether it’s in the context of data sharing, artificial intelligence, or customer retention. Of course, I reached that conclusion quite some time ago (see this blog from late 2015) so I’m pleased to have the company. But I’m also trying to figure out where we all need to go next.

I picked up a book on trust the other day (haven’t gotten past the introduction, so can’t say yet whether I’d recommend it) that seems to argue the problem of trust is different today because trust was traditionally based on central authority but authority itself has largely collapsed. The author sees a new, distributed trust model built on transparent, peer-based reputation (think Uber and Airbnb)* that lets people confidently interact with strangers. The chapter headings suggest she ends up proposing blockchain as the ultimate solution. This seems like more weight than any technology can bear and may just be evidence of silver bullet syndrome. But it does hint at why blockchain has such great appeal: it’s precisely in tune with the anti-authority tenor of our times.

From a marketer’s perspective, what’s important here is not that blockchain might provide trust but that conventional authority certainly cannot. This means that most trust-building methods marketers naturally rely on, which are based in traditional authority, probably won’t work. Things like celebrity endorsements, solemn personal promises from the CEO, and references to company size or history carry little weight in a hyper-skeptical environment. Even consumer reviews and peer recommendations are suspect in a world where people don’t trust that they’re genuine. What’s needed are methods that let people see things for themselves: a sort of radical transparency that doesn’t require relying on anyone else’s word, including (or perhaps especially) the word of “people just like me”.

One familiar example is comparison shopping engines that don’t recommend a particular product but make it easy for users to compare alternatives and pick the option they like best. A less obvious instance would be a navigation app that shows traffic conditions and estimated times for alternate routes: it might present what it considers the best choice but also makes it easy for the user to see what’s happening and, implicitly, why the system’s recommendation makes sense. Other examples include package tracking apps that remove uncertainty (and thus reduce anxiety) by showing the movement of a shipment towards the customer, customer service apps that track the location of a repair person as he approaches for a service call, or phone queue systems that estimate waiting time and state how many customers are ahead of the caller. A determined skeptic could argue that such solutions can't be trusted because the systems themselves could be dishonest. But any falsehoods would quickly become apparent when a package or repair person didn’t arrive as expected, so they are ultimately self-validating.

Of course, many activities are not so easily verified. Claims related to data sharing are high on that list: it’s pretty much impossible for a customer to know how their data has been used or whether it has been shared without their permission.
This is why the European Union’s approach to privacy in the General Data Protection Regulation (GDPR) makes so much sense: the rules include a requirement to track each use of personal data, documentation of authority for that use, and a right of individuals to see the history of use. That’s a very different attitude from the U.S. approach, which has much looser consent requirements and no individual rights to review companies' actual behaviors. In other words, the EU approach creates a forced transparency that builds trust, especially since false information would be a legally-punishable offense. There’s a slender chance that the GDPR approach will be [...]
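To make that tracking requirement concrete, here is a minimal sketch of the kind of usage log such a system might keep so an individual could later review how their data was used. This is my own illustration in Python, not anything prescribed by the GDPR text or offered by any particular vendor; all names and fields are hypothetical.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import List

    @dataclass
    class DataUseRecord:
        # One entry per use of a person's data: what was used, why,
        # and the legal basis ("authority") claimed for that use.
        customer_id: str
        data_fields: List[str]   # e.g. ["email", "purchase_history"]
        purpose: str             # e.g. "April newsletter send"
        legal_basis: str         # e.g. "consent", "contract", "legitimate interest"
        used_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    # An append-only history that can be shown to the individual on request.
    usage_log: List[DataUseRecord] = []

    def record_use(customer_id, data_fields, purpose, legal_basis):
        entry = DataUseRecord(customer_id, data_fields, purpose, legal_basis)
        usage_log.append(entry)
        return entry

    def history_for(customer_id):
        # The individual's right to see how their data has been used.
        return [e for e in usage_log if e.customer_id == customer_id]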



Adobe Adds Experience Cloud Profile: Why It's Good News for Customer Data Platforms

2018-03-28T20:14:19.493-04:00

"A CDP by any other name still stores unified customer data."Adobe on Tuesday announced the Experience Cloud Profile, which it described as a “complete, real-time view of customers” including data from outside of Adobe Cloud systems. The announcement was frustratingly vague but some ferreting around* uncovered this blog post by Adobe VP of Product Engineering Anjul Bhambhri, who clarified that (a) the new product will persistently store data ingested from all sources and (b) perform the identity stitching needed to build a meaningfully unified customer view. Adobe doesn’t use the term Customer Data Platform but that’s exactly what they’ve described here. So, unlike last week's news that Salesforce is buying MuleSoft, this does have the potential to offer a viable alternative to stand-alone CDP products.Of course, the devil is in the details but this is still a significant development. Adobe’s offering is well thought out, including not just an Azure database to provide storage but also an open source Experience Data Model to simplify sharing of ingested data and compatible connectors from SnapLogic, Informatica, TMMData, and Microsoft Dynamics to make dozens of sources immediately available. Adobe even said they’ve built in GDPR-required controls over data sharing, which is a substantial corporate pain point and key CDP use case. The specter of competition from the big marketing clouds has always haunted the CDP market. Salesforce’s MuleSoft deal was a dodged bullet but the Adobe announcement seems like a more palpable hit.** Yet the blow is far from fatal – and could actually make the market stronger over time. Let me explain.First the bad news: Adobe now has a reasonable product to offer clients who might otherwise be frustrated by the lack of integration of its existing Experience Cloud products. This has been a substantial and widely recognized pain point. Tony Byrne of the Real Story Group has been particularly vocal on the topic. The Experience Cloud Profile doesn’t fully integrate Adobe’s separate products, but it does seem to let them share a rich set of customer data. That’s exactly the degree of integration offered by a CDP. So any Adobe client interested in a CDP will surely take a close look at the new offering.The good news is that not everyone is an Adobe client. It’s true that the Cloud Profile could in theory be used on its own but Adobe would need to price it very aggressively to attract companies that don’t already own other Adobe components. The could of course be an excellent acquisition strategy but we don’t know if it’s what Adobe has in mind. (I haven’t seen anything about the Cloud Profile pricing but it’s a core service of the Adobe Experience Platform, which isn’t cheap.) What this means is that Adobe is now educating the market about the value of a persistent, unified, comprehensive, open customer database – that is, about the value of CDPs. This should make it much easier for CDP vendors to sell their products to non-Adobe clients and even to compete with Adobe to deliver CDP functions to Adobe’s own clients. I’ll admit I have a vested interest in the success of the CDP market, as inventor of the term and founder of the CDP Institute. So I’m not entirely objective here. But as CDP has climbed to the peak of the hype cycle, I’ve been exquisitely aware that it has no place to go but down – and that this is inevitable. 
The best CDP vendors can hope for is to exchange being a “hot product” for being an established category – something that people recognize as a standard component of a complete marketing architecture, alongside other components such as CRM, marketing automation, and Web content management. I’ve long felt that the function provided by CDP – a unified, persistent, sharable customer database – fills a need that won’t go away, regardless of whether the need is filled by stand-alone CDPs or components of larger suites like Adobe Experience Cloud. In other words, the sta[...]



Salesforce Buys MuleSoft and Offers It as a Data Unification Solution

2018-03-20T21:37:59.775-04:00

The Customer Data Platform industry is doing very well, thank you, with new reports out recently from both Gartner and Forrester and the CDP Institute launching its European branch. But the great question hovering over the industry has been why the giant marketing cloud vendors haven’t brought out their own products and what will happen when they do. Oracle sometimes acts as if their BlueKai Data Management Platform fills the CDP role, while Adobe has made clear they don’t intend to do more than create a shared ID that can link data stored in its separate marketing applications. Salesforce has generally claimed its Marketing Cloud product (formerly ExactTarget) is a CDP, a claim that anyone with experience using the Marketing Cloud finds laughable. The flaws in all these approaches have been so obvious that the question among people who understand the issues has been why the companies haven’t addressed them: after all, the problems must be as obvious to their product strategists as everyone else and the attention gained by CDP makes the gaps in their product offerings even more glaring. My general conclusion has been that the effort needed to rework the existing components of their clouds is too great for the vendors to consider. Remember that the big cloud vendors built their suites by purchasing separate products. The effort to rebuild those products would be massive and would discard technology those companies spent many billions of dollars to acquire. So rationalization of their existing architectures, along with some strategic obfuscation of their weaknesses, seems the lesser evil.

We got a slightly clearer answer to the question on Tuesday when Salesforce announced a $6.5 billion purchase of MuleSoft, a data integration vendor that provides connectors between separate systems. In essence, Salesforce has adopted the Adobe approach of not pulling all customer data into a single repository, but rather connecting data that sits in its system of origin. In Salesforce’s own words, “MuleSoft will power the new Salesforce Integration Cloud, which will enable all enterprises to surface any data—regardless of where it resides—to drive deep and intelligent customer experiences throughout a personalized 1:1 journey.” This is a distinct contrast with the CDP approach, which is to load data from source systems into a separate, unified, persistent database. The separate database has some disadvantages – in particular, it can involve replicating a lot of data – but it also has major benefits. These include storing history that may be lost in source systems, using that history to build derived data elements such as trends and aggregates, and formatting the data in ways that are optimized for quick access by marketing systems and other customer-focused applications. Although the difference between these two approaches is clear, some practical compromises can narrow the distance between them. Most CDPs can access external data in place, reducing the amount of data to be moved and allowing the system to use current versions of highly volatile information such as weather, stock prices, or product inventories. Conversely, a system like MuleSoft can push data into a persistent database as easily as it can push it to any other destination, so it can build some version of a persistent database. In fact, many CDPs that started out as tag managers have taken this approach.

But pushing data into a persistent database isn’t enough.
MuleSoft and similar products work with well-defined inputs and outputs, while CDPs often can accept and store data that hasn’t been mapped to a specific schema. Even more important, I’m unaware of any meaningful ability in MuleSoft to unify customer identities, even using relatively basic approaches such as identity stitching. It’s possible to build workarounds, such as calls to external identity management systems or custom-built matching processes. Again, these are solutions employed by some CD[...]
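For readers unfamiliar with the term, here is a minimal sketch of what deterministic identity stitching involves. It is my own simplified illustration in Python, not MuleSoft’s or any CDP vendor’s actual implementation; the record layout and identifiers are made up. Records that share a hard identifier (here, email or phone) are clustered into a single customer profile:

    from collections import defaultdict

    # Toy records from different source systems; field names are hypothetical.
    records = [
        {"id": "crm-1", "email": "ann@example.com", "phone": None},
        {"id": "web-7", "email": "ann@example.com", "phone": "555-0100"},
        {"id": "pos-3", "email": None,              "phone": "555-0100"},
        {"id": "crm-2", "email": "bob@example.com", "phone": None},
    ]

    # Union-find structure: records that share any identifier end up in one cluster.
    parent = {r["id"]: r["id"] for r in records}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Index records by each identifier value and merge any that share one.
    by_key = defaultdict(list)
    for r in records:
        for key in ("email", "phone"):
            if r[key]:
                by_key[(key, r[key])].append(r["id"])

    for ids in by_key.values():
        for other in ids[1:]:
            union(ids[0], other)

    # Group source records into unified profiles.
    profiles = defaultdict(list)
    for r in records:
        profiles[find(r["id"])].append(r["id"])

    print(dict(profiles))
    # {'pos-3': ['crm-1', 'web-7', 'pos-3'], 'crm-2': ['crm-2']}

Fuzzier matching (similar names, postal addresses, probabilistic device links) adds considerable complexity on top of even this basic approach.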



Picking the Right First Project for Your Customer Data Platform

2018-03-19T02:54:15.750-04:00

For the past year, the most common question about Customer Data Platforms has been how they differ from Data Management Platforms. Recently that seems to have changed. Today, the question everyone seems to be asking is what project they should pick as their first CDP use case.

That’s certainly progress but it’s a much harder question to answer than the one about DMPs. Like any good consultant, I can only answer that question with “it depends” and by then asking more questions. Here are some of the factors that go into a final answer.

What resources do you have available? The goal for your initial use case is to get something done quickly that returns substantial value. Getting something done quickly means you want a project that uses existing resources to the greatest degree possible. Ideally, the only new element would be the CDP itself, and even the CDP deployment would use a small number of data sources. So, an ideal first project would use data in existing systems that is well understood, known to be of high quality, and can easily be extracted to feed the CDP. Alternately, the first project might involve new data collected by the CDP itself, such as Web site behaviors captured by the CDP's own page tag. If the first project is purely analytical, such as customer profiling or journey analysis, then you don’t need to worry about connecting to execution systems, although you do need staff resources to properly interpret the data and possibly some analytical or reporting systems. But if you happen to have good execution systems in place, it may make sense for the first project to connect with them. Or, you may pick a CDP that provides its own execution capabilities or can feed lists or offer recommendations to external delivery systems.

What use case will provide value? This is where good delivery resources can be helpful: it’s much easier to deliver value with a use case that involves direct customer interaction and, thus, the opportunity to increase revenue or reduce costs. Often this can still be quite simple, such as a change in Web site personalization (involving just one channel for both source and delivery), an event-triggered email, or a prioritized contact list for sales teams. If execution isn’t an option, an analytical project can still be valuable if it presents information that wasn’t previously available. This may mean combining data that was previously kept separate, reformatting data that couldn’t be analyzed in its original form, or simply pulling data from an inaccessible system into an accessible database. The trick here is for the analysis to generate insights that themselves can be the basis for action, even if the CDP isn’t part of the execution process.

How much organizational change will be needed? Technical obstacles are often less significant barriers than organizational resistance. In particular, it can be difficult to start with projects that cross lines of authority either within marketing (say, separate Web and email teams) or between marketing and other departments (such as operations or customer support). When considering such changes, take into account the need to revise business processes, to provide adequate training, to align compensation systems with the new goals, to provide reporting systems that track execution, and to measure the value of results. As a practical matter, the fewer parts of the organization affected by the initial project, the easier it will be to deploy and the higher the likelihood of success.

Where’s the pain?
It’s tempting to search for an initial project that is primarily easy to deploy. But even an easy project is competing with other demands on company resources in general and on staff and managers’ time in particular. So it’s important to pick a first project that solves a problem that’s recognized as important. If the problem is big enough – and it’s clear the CDP can solve it – then you have a good chance of convi[...]



Eager to Sell Your Personal Data? You'll Have to Wait

2018-03-14T13:43:55.399-04:00

Should marketers pay consumers directly to access their personal data? The idea isn’t new but it’s become more popular as people see the huge profits that Google, Facebook, and others make from using that data, as consumers become more aware of the data trade, and as blockchain technology makes low cost micro-payments a possibility. One result is that a crop of new ventures based on the concept has popped up like mushrooms – which, like mushrooms, can be hard to tell apart. I’ve been mentioning these in the CDP Institute newsletter as I spot them but only recently found time to take a closer look. It turns out that these things I’ve been lumping together actually belong to several different species. None seem to be poisonous but it’s worth sharing a field guide to help you tell them apart.

Before we get into the distinguishing features, let’s look at what these all have in common. They’re all positioned as a way for consumers to get value from their data. I’ve also bumped into a number of data marketplaces that serve traditional data owners, such as Web site publishers and compilers. They can often use some of the same technologies, including micro-payments, blockchain, and crypto-currency tokens. Some even sell personal data, especially if they’re selling ads targeted with such data. Some sell other things, such as streams from Internet of Things devices. Examples of such marketplaces include Sonobi, Kochava, Narrative I/O, Datonics, Rublix and IOTA. Again, the big difference here is the sellers in the traditional marketplaces are data aggregators, not private individuals.

Here’s a look at a half-dozen ventures I’ve lumped into the personal data marketplace category (which I suppose needs a three letter acronym of its own).

Dabbl turns out to be a new version of an old idea, which is to pay people for taking surveys. There are dozens of these: here's a list. Dabbl confused me with a headline that said “Everyone’s profiting from your time online but you.” The payment mechanism is old-school gift cards. On the plus side: unlike most products in this list, Dabbl is up and running.

Thrive pays users for sharing their data, but only in the broad sense that they are paid to fill out profiles which are exposed to advertisers when the users visit participating Web sites. The advertisers are paying Thrive; individual users aren’t deciding who sees their data or being paid to grant access on a buyer-by-buyer basis. Payments are made via a crypto-token which is on sale as I write this. The ad marketplace is scheduled for launch at the end of 2018. That sequence suggests there’s at least a little cryptocurrency speculation in the mix. (Another hint: they’re based in Malta. Yet another hint: the U.S. Securities and Exchange Commission won’t let you buy the tokens.)

Nucleus Vision is also in the midst of its token sale. But they’re much more interested in discussing a proprietary technology that detects mobile phones as they enter a store and shares the owner’s data using blockchain as an exchange, storage, and authorization mechanism. Store owners can then serve appropriate offers to visitors. This sounds like a lot of other products except that Nucleus’ technology does it without a mobile app. (It does apparently need some cooperation from the mobile carrier.) Rewards are paid in tokens which can be earned for store visits, by using coupons or discounts, by making purchases, or by selling data. Each retailer runs its own program, so this isn’t a marketplace where different buyers bid for each consumer’s data.
Sensors are currently running in a handful of stores and the loyalty and couponing systems are under development.

Momentum is an outgrowth of the existing MobileBridge loyalty system. It rewards customers with yet another crypto-token (on sale in late April) for marketer-selected behaviors. Brands can play as well as retailers but it’s still the s[...]



State of Customer Data Platforms in Europe

2018-03-04T14:38:01.901-05:00

The Customer Data Platform Institute will be launching its European branch later this month with a series of presentations in London, Amsterdam and Hamburg. We’ve seen considerable CDP activity in Europe – nearly one quarter of the CDPs in the Institute's latest industry update are Europe-based, several others with European roots have added a U.S. headquarters, and some U.S.-based CDPs have significant European business. A recent analysis of CDP Institute membership also found that one quarter of our individual members are in Europe. So what, exactly, is the state of CDP in Europe?

It’s long been an article of faith on both sides of the Atlantic that the U.S. market is ahead of Europe on marketing technology in general and customer data management in particular. That (plus the larger size of the U.S. market) is why so many European vendors have relocated to the U.S. This study from Econsultancy suggests the difference is overstated if it exists at all: 9% of European companies reported a highly integrated tech stack, barely under the 10% figure for North American companies. North American firms were actually more likely to report a fragmented approach (48% vs 42%), although that was only because European companies were more concentrated in the least advanced category (“little or no cloud based technology”) by 20% vs 13%. The assumption that cloud-based technology is synonymous with advanced martech is debatable but, then again, the survey was sponsored by Adobe. What is clear is that European firms have generally lagged the U.S. in cloud adoption -- see, for example, this report from BARC Research.

Lower cloud use probably hasn’t directly impeded CDP deployment: although nearly all CDPs are cloud-based, a substantial number offer an on-premises option. (The ratio was seven out of 24 in the CDP Institute’s recent vendor comparison report, including nearly all of the Europe-based CDPs.) But the slower cloud adoption may be a hint of the generally slower pace of change among European IT departments, which could itself reduce deployment of CDPs.

A Salesforce survey of IT professionals supports this view. Answers to questions about leading digital transformation, being driven by customer expectations, and working closely with business units all found that U.S. IT workers are slightly but distinctly more business-oriented than their European counterparts. Interestingly, there’s a split within the European respondents: the UK and Netherlands are more similar to the U.S. answers than France and Germany. I should also point out that I’ve highlighted questions where the U.S. and European answers were significantly different – there were quite a few other questions where the answers were pretty much the same.

Organizational silos outside of IT are another barrier to CDP adoption. A different Salesforce survey, this one of advertising managers, also found that North American firms are generally more integrated than their European counterparts. The critical result from a martech perspective is that North American marketing and advertising departments were much more likely to collaborate on buying technology.

Then again, a Marketo survey found that European respondents (from a mix of IT, marketing, sales, and service departments) were generally more satisfied with their tools and performance, even though they lagged North Americans slightly in innovation and more clearly in strategic alignment with corporate objectives.
This isn’t necessarily inconsistent with the previous results: being less integrated with other departments may free the Europeans to pursue their departmental goals more effectively, even if they’re less fully aligned with corporate objectives. Other surveys have given similar results: people are generally happier with technology when they buy it for themselves.

Not surprisingly, one area where the Europeans are clearly ahead i[...]



Will CDP Buyers Consider Private Clouds as On-Premise Deployment?

2018-02-26T18:49:43.441-05:00

Most Customer Data Platforms are Software as a Service products, meaning they run on servers managed by the vendor. But some clients prefer to keep their data in-house. So before releasing the CDP Vendor Comparison report – now available here – I added a line for on-premises deployment.

This seemed like a perfect fit: a clear yes/no item that some buyers consider essential. But it turned out to raise several issues:

- on-premises vs on-premise. I originally used “on-premise”, which is how the term is typically rendered. One of the commenters noted this is a common error. A bit of research showed it’s been a topic of discussion but on-premise is now more widely used relating to computer systems. On-premises actually sounds a bit pedantic to me, but I’m using it to avoid annoying people who care. (Interestingly, no one seems too concerned about whether to use the hyphen. I guess even grammar geeks pick their battles.)

- private clouds. Several vendors argued that on-premises is an old-fashioned concept that’s largely been replaced by private clouds as a solution for companies that want to retain direct control over their systems and data. This resonated: I recalled seeing this survey from 451 Research showing that conventional on-premises [they actually used “on-premise”] deployments now account for just one-quarter of enterprise applications and the share is shrinking.

Percentage of Applications by Venue:
24% Conventional (on-premise, non-cloud)
18% On-premise private cloud
15% Hosted private cloud
14% Public cloud
13% Off-premise non-cloud
Source: 451 Research, Strategy Briefing: Success Factors for Managing Hybrid IT, 2017

My initial interpretation of this was that on-premises private clouds meet the same goals as conventional on-premises deployments, in the sense of giving the company’s IT department complete control. But in discussions with CDP vendors, it turned out that they weren’t necessarily differentiating between on-premises private clouds and off-premise private clouds, which might be running on private servers (think: Rackspace) or as “virtual private servers” on public clouds (think: Amazon Web Services). Clearly there are different degrees of control involved in each of these and companies that want an on-premises solution probably have their limits on how far they’ll go in the private cloud direction.

- public clouds. One vendor speculated that most remaining conventional deployments are old systems that can’t be migrated to the cloud. The implication was that buyers who could run a CDP in the cloud would gladly do this instead of insisting on an on-premises configuration. This survey from Denodo suggested otherwise: while it found that 77% of respondents were using a public cloud and 50% were using a virtual private cloud, it also found that 68% are NOT storing “sensitive data” in the public cloud. Presumably the customer data in a CDP qualifies as sensitive. I don't know whether the respondents would consider a “virtual private cloud” as part of the public cloud.
But I think it’s reasonable to assume that a considerable number of buyers reject external servers of any sort as an option for CDP deployment, and that “on-premises” (including on-premises private clouds) is a reasonable term to describe their preferred configuration.[...]



How Customer Data Platforms Help with Marketing Performance Measurement

2018-02-19T20:48:10.743-05:00

John Wanamaker, patron saint of marketing measurement.

If you’ve been following my slow progress towards a set of screening questions for Customer Data Platforms, you may recall that “incremental attribution” was on the list. The original reason was that some of the systems I first identified as CDPs offered incremental attribution as their primary focus. Attribution also seemed like a specific enough feature that it could be meaningfully distinguished from marketing measurement in general, which nearly any CDP could support to some degree.

But as I gathered answers from the two dozen vendors who will be included in the CDP Institute’s comparison report, I found that at best one or two provide the type of attribution I had in mind. This wasn't enough to include in the screening list. But there was an impressive variety of alternative answers to the question. Those are worth a look.

- Marketing mix models. This is the attribution approach I originally intended to cover. It gathers all the marketing touches that reach a customer, including email messages, Web site views, display ad impressions, search marketing headlines, and whatever else can be captured and tied to an individual. Statistical algorithms then look at customers who had a similar set of contacts except for one item and attribute any difference in performance to that. In practice, this is much more complicated than it sounds because the system needs to deal with different levels of detail and intelligently combine cases that lack enough data to treat separately. The result is an estimate of the average value generated by incremental spending in each channel. These results are sometimes combined with estimates created using different techniques to cover channels that can’t be tied to individuals, such as broadcast TV. The estimates are used to find the optimal budget allocation across all channels, a.k.a. the marketing mix.

- Next best action and bidding models. These also estimate the impact of a specific marketing message on results, but work at the individual rather than channel levels. The system uses a history of marketing messages and results to predict the change in revenue (or other target behavior) that will result from sending a particular message to a particular individual. One typical use is deciding how much to bid for a display ad impression; another is to choose products or offers to make during an interaction. They differ from incremental attribution because they create separate predictions for each individual based on their history and the current context. Several CDP systems offer this type of analysis. But it’s ultimately not different enough from other predictive analytics to treat it as a distinct specialty.

- First/last/fractional touch. These methods use the individual-level data about marketing contacts and results, but apply fixed rules to allocate credit. They are usually limited to online advertising channels. The simplest rules are to attribute all results to either the first or last interaction with a buyer. Fractional methods divide the credit among several touches but use predefined rules to do the allocation rather than weights derived from actual data. These methods are widely regarded as inadequate but are by far the most commonly used because alternatives are so much more difficult. Several CDPs offer these methods.

- Campaign analysis. This looks at the impact of a particular marketing campaign on results.
Again, the fundamental method is to compare performance of individuals who received a particular treatment with those who didn’t. But there’s usually more of an effort to ensure the treated and non-treated groups are comparable, either by setting up a/b test splits in advance or by analyzing results for different segment[...]
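To make the rule-based methods concrete, here is a minimal sketch of first-touch, last-touch, and even-split fractional attribution over one customer’s recorded touches. The touch data and conversion value are made up for illustration; this is my own sketch in Python, not any vendor’s implementation, and an even split is just one common predefined fractional rule.

    # Touches for one converting customer, in time order; channels are made up.
    touches = ["display", "email", "search", "email"]
    conversion_value = 100.0

    def first_touch(touches, value):
        # All credit to the first recorded interaction.
        return {touches[0]: value}

    def last_touch(touches, value):
        # All credit to the final interaction before conversion.
        return {touches[-1]: value}

    def fractional_even(touches, value):
        # Predefined rule: split credit evenly across every touch.
        credit = {}
        share = value / len(touches)
        for channel in touches:
            credit[channel] = credit.get(channel, 0.0) + share
        return credit

    print(first_touch(touches, conversion_value))     # {'display': 100.0}
    print(last_touch(touches, conversion_value))      # {'email': 100.0}
    print(fractional_even(touches, conversion_value)) # {'display': 25.0, 'email': 50.0, 'search': 25.0}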



Will GDPR Hurt Customer Data Platforms and the Marketers Who Use Them?

2018-02-18T14:51:44.727-05:00

Like an imminent hanging, the looming execution of the European Union’s General Data Protection Regulation (GDPR) has concentrated business leaders’ minds on their customer data. This has been a boon for Customer Data Platform vendors, who have been able to offer their systems as solutions to many GDPR requirements. But it raises some issues as well.

First the good news: CDPs are genuinely well suited to help with GDPR. They’re built to solve two of GDPR’s toughest technical challenges: connecting all internal sources of customer data and linking all data related to the same person. In particular, CDPs focus on first party (i.e., company-owned) personally identifiable information and use deterministic matching to ensure accurate linkages. Those are exactly what GDPR needs. Some CDP vendors have added GDPR-specific features such as consent gathering, usage tracking, and data review portals. But those are relatively easy once you’ve assembled and linked the underlying data.

GDPR is also good for CDPs in broader ways. Most obviously, it raises companies’ awareness of customer data management, which is the core CDP use case. It will also raise consumers' awareness of their data and their rights, which should lead to better quality customer information as consumers feel more confident that data they provide will be handled properly. (See this Accenture report finding that 75% of consumers are willing to share personal data if they can control how it’s used, or this PegaSystems survey in which 45% of EU consumers said they would erase their data from a company that sold or shared it with outsiders.) Conversely, GDPR-induced constraints on acquiring external data should make a company’s own data that much more valuable. Collection requirements for GDPR should also make it easier for companies to tailor the degree of personalization to individual preferences. This Adobe study found that 28% of consumers are not comfortable sharing any information with brands and 26% say that too-creepy personalization is their biggest annoyance with brand content. These results suggest there’s a segment of privacy-focused consumers who would value a privacy-centric marketing approach. (That this approach would itself require sophisticated personalization technology is an irony we marketers can quietly keep to ourselves.)

So, what's not to like? The downside to GDPR is that greater corporate interest in customer data means that marketers will not be left to manage it on their own. Marketing departments have been the primary buyers of Customer Data Platforms because corporate IT often lacks the interest and skills needed to meet marketing needs. GDPR and digital transformation don't give IT new resources but they do mean it will be more involved. Indeed, this report from data governance vendor Erwin found that responsibility for meeting data regulations is held by IT alone at 36% of companies and is shared between IT and all business units (not just marketing) at another 55%. I’ve personally heard many recent stories about corporate IT buying CDPs.

Selling to IT departments isn’t a problem for CDP vendors. Their existing technology should work with little change. At most, they'll need to retool their sales and marketing. But marketers may suffer more. Corporate IT will have its own priorities and marketing won’t be at the top of the list.
For example, this report from master data management vendor Semarchy found that customer experience, service and loyalty applications take priority over sales and marketing applications. More broadly, studies like this one from ComputerWorld consistently show that IT departments prioritize productivity, security and compliance over customer experience and analytics. Putting IT and legal departments in ch[...]



Celebrus CDP Offers In-Memory Profiles

2018-02-02T19:24:27.511-05:00

It’s almost ten years to the day since I first wrote about Celebrus, which then called itself speed-trap (a term that presumably has fewer negative connotations in the U.K. than the U.S.). Back then, they were an easy-to-deploy Web site script that captured detailed visitor behaviors. Today, they gather data from all sources, map it to a client-tailored version of a 100+ table data model, and expose the results to analytics and customer engagement systems as in-memory profiles.

Does that make them a Customer Data Platform? Well, Celebrus calls itself one – in fact, they were an early and enthusiastic adopter of the label. More important, they do what CDPs do: gather, unify, and share customer data. But Celebrus does differ in several ways from most CDP products:

- in-memory data. When Celebrus described their product to me, it sounded like they don’t keep a persistent copy of the detailed data they ingest. But after further discussion, I found they really meant they don’t keep it within those in-memory profiles. They can actually store as much detail as the client chooses and query it to extract information that hasn't been kept in memory. The queries can run in real time if needed. That’s no different from most other CDPs, which nearly always need to extract and reformat the detailed data to make it available. I’m not sure why Celebrus presents themselves this way; it might be that they have traditionally partnered with companies like Teradata and SAS that themselves provided the data store, or that they partnered with firms like Pega, Salesforce, and Adobe that positioned themselves as the primary repository, or simply to avoid ruffling feathers in IT departments that didn't want another data warehouse or data lake. In any case, don’t let this confuse you: Celebrus can indeed store all your detailed customer data and will expose whatever parts you need.

- standard data model. Many CDPs load source data without mapping it to a specific schema. This helps to reduce the time and cost of implementation. But mapping is needed later to extract the data in a usable form. In particular, any CDP needs to identify core bits of customer information such as name, address, and identifiers that connect records related to the same person. Some CDPs do have elaborate data models, especially if they’re loading data from specific source systems or are tailored to a specific industry. Celebrus does let users add custom fields and tables, so its standard data model doesn’t ultimately restrict what the system can store.

- real-time access. The in-memory profiles allow external systems to call Celebrus for real-time tasks such as Web site personalization or bidding on impressions. Celebrus also loads, transforms, and exposes its inputs in real time. It isn't the only CDP to do this, but it's one of just a few.

Celebrus is also a bit outside the CDP mainstream in other ways. Their clients have been largely concentrated in financial services, while most CDPs have sold primarily to online and offline retailers. While most CDPs run as a cloud-based service, Celebrus supports cloud and on-premise deployments, which are preferred by many financial services companies.
Most CDPs are bought by marketing departments, but Celebrus is often purchased by customer experience, IT, analytics, and digital transformation teams and used for non-marketing applications such as fraud detection and system performance monitoring.

Other Celebrus features are found in some but not most CDPs, so they’re worth noting if they happen to be on your wish list. These include ability to scan for events and issue alerts; handling of offline as well as online identity data; and specialized functions to comply with the European Union’s GDPR p[...]
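For readers wondering what an “in-memory profile” amounts to in practice, here is a minimal sketch of the general idea: a set of per-customer summary attributes kept in memory for low-latency lookups, with the full detail pulled from slower persistent storage only when needed. This is my own simplified illustration in Python, not Celebrus’s actual design; all names and values are hypothetical.

    # Full detail lives in persistent storage (stubbed here as a dict); in a real
    # system this would be a database or data lake holding the complete history.
    detail_store = {
        "cust-42": {"page_views": 1843, "orders": ["O-1001", "O-1117", "O-1256"]},
    }

    # In-memory profiles hold just the summary attributes real-time callers need.
    profiles = {
        "cust-42": {"segment": "frequent_buyer", "last_seen": "2018-02-01", "ltv": 480.0},
    }

    def get_profile(customer_id):
        # Fast path: low-latency lookup for personalization or bidding decisions.
        return profiles.get(customer_id)

    def get_detail(customer_id):
        # Slow path: query the full stored history on demand.
        return detail_store.get(customer_id)

    print(get_profile("cust-42"))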



Collapse of Civilization Makes Marketers' Jobs Harder

2018-01-30T08:36:20.372-05:00

Political situations come and go but trust is the foundation of civilization itself. So I was genuinely shaken to see a report from the Edelman PR agency that trust in U.S. institutions fell last year by a huge margin – 17% for the general public and 34% for the “informed public,” placing us dead last among 27 countries. All four measured institutions (business, media, government, and non-governmental organizations) took similar hits, although government fell the most.

You won’t be surprised to learn that concerns about fake news and social media are especially prominent. What’s less expected is that trust in traditional journalism actually increased in the U.S. The over-all decline in media trust resulted from falling confidence in news from search engines and social media. Similarly, world-wide trust increased in traditional authorities such as technical, academic, and business experts. So there are rays of hope.

Digging deeper, the sharp fall in U.S. trust levels follows two years when levels were exceptionally high. The current U.S. trust level is roughly the same as the four reports before that. Maybe you shouldn't head for that doomsday cabin quite yet. Still, other reports also show tremendous doubts about basic questions of truth. A Brand Intelligence study comparing brand attitudes of Democrats vs. Republicans found that eight of the top 10 most polarizing brands were news outlets. World-wide, 59% of people told Edelman they were simply not sure what is true and just 36% felt the media were doing a good job of guarding information quality.

The social implications of all this are sadly obvious. But this blog is about marketing. How can marketers adapt and thrive in a trust-challenged, politically-polarized world?

Protect privacy. Consumers can be remarkably cavalier in practice about protecting their data: this McAfee report found 41% don’t immediately change default passwords on new devices and 34% don’t limit access to their home network at all. But they are adamant that companies they do business with be more careful: Accenture studies have found that 92% of U.S. consumers feel it’s extremely important for companies to protect their personal information while 80% won’t do business with companies they don’t trust. Similarly, a Pega survey found that 45% of EU respondents said they would require companies to erase their data if they found it had been sold or shared with other companies.

Personalize wisely. Accenture also found that 44% of consumers are frustrated when companies don’t deliver relevant, personalized shopping experiences and 41% had switched companies due to lack of personalization or trust. So there’s clearly a price to be paid for not using the data you do collect. Similarly, an Oracle report found 50% of consumers would be attracted to offers based on personal data while just 29% would find them creepy. In fact, consumers have a remarkably pragmatic attitude toward their data: a 24[7] survey found their number one reason for sharing personal information is to receive discounts. This has two implications: ask consumers whether they want personalized messages (or any messages), and be sure the value of your personalization outweighs its inherent creepiness.

Use trusted media. Consumers’ attitudes towards media in general, and social media in particular, are complicated. We’ve already noted that Edelman found growing distrust in online platforms. Other studies by Kantar and Sharethrough found the same.
But consumers still spend most of their time on search engines and social media, which GlobalWebIndex found remain by far the top research channels. Yet when it comes to building awareness, a different GlobalWebIndex report found that social ranked far b[...]



Simple Questions to Screen Customer Data Platform Vendors

2018-01-24T16:10:09.996-05:00

I’ve been working for months to find a way to help marketers understand the differences between Customer Data Platform vendors. After several trial balloons and with considerable help from industry friends, I recently published a set of criteria that I think will do the job. You can see the full explanation on the CDP Institute blog. But, since this blog has its own readership I figured I’d post the basics here as well.

The primary goal is to give marketers a relatively easy way to decide which CDPs are likely to meet their needs. To do this I’ve come up with a small list of features that relate directly to working with particular data sources and supporting particular applications. The theory is that marketers know what sources and applications they need to support, even if they're not experts in the fine points of CDP technology.

In other words, read these items as meaning: if you want your CDP to support [this data type or application] then it should have [this feature].

Obviously this list covers just a tiny fraction of all possible CDP features. It’s up to marketers to dig into the details of each system to determine how well it supports their specific needs. We have detailed lists of CDP features in the Evaluation section of the CDP Institute Library.

The final list also includes a few features that are present in all CDPs (or, more precisely, in all systems that I consider a CDP – we can’t control what vendors say about themselves). These are presented since there’s still some confusion about how CDPs differ from other types of systems. Now that the list is set, the next step is to research which features are actually present in which vendors and publish the results. That will take a while but when it’s done I’ll certainly announce it here.

Here’s the list:

Shared CDP Features: Every CDP does all of these. Non-CDPs may or may not.

Retain original detail. The system stores data with all the detail provided when it was loaded. This means all details associated with purchase transactions, promotion history, Web browsing logs, changes to personal data, etc. Inputs might be physically reformatted when they’re loaded into the CDP but can be reconstructed if needed.

Persistent data. The system retains the input data as long as the customer chooses. (This is implied by the previous item but is listed separately to simplify comparison with non-CDP systems.)

Individual detail. The system can access all detailed data associated with each person. (This is also implied by the first item but is a critical difference from systems that only store and access segment tags on customer records.)

Vendor-neutral access. All stored data can be exposed to any external system, not only components of the vendor’s own suite. Exposing particular items might require some set-up and access is not necessarily a real time query.

Manage Personally Identifiable Information (PII). The system manages Personally Identifiable Information such as name, address, email, and phone number. PII is subject to privacy and security regulations that vary based on data type, location, permissions, and other factors.

Differentiating CDP Features: A CDP doesn’t have to do any of these although many do some and some do many. These are divided into three subclasses: data management, analytics, and customer engagement.

Data Management. These are features that gather, assemble, and expose the CDP data.

Base Features. These apply to all types of data.

API/query access.
External systems can access CDP data via an API or standard query language such as SQL. It’s just barely acceptable for a CDP to not offer this function and instead provide access through data extracts. But API or query access is much pr[...]
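As a concrete illustration of the API/query access item, here is a minimal sketch of what SQL-style access to a unified customer table might look like, using Python and an in-memory SQLite database as a stand-in. The table name and columns are hypothetical and purely for illustration; actual schemas and access methods vary by vendor.

    import sqlite3

    # Stand-in for a CDP's queryable store; table and columns are hypothetical.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE unified_customers (
            customer_id        TEXT PRIMARY KEY,
            email              TEXT,
            lifetime_value     REAL,
            last_purchase_date TEXT
        )
    """)
    conn.execute(
        "INSERT INTO unified_customers VALUES (?, ?, ?, ?)",
        ("cust-42", "ann@example.com", 480.0, "2018-01-15"),
    )

    # An external system (email tool, BI platform, etc.) pulls what it needs.
    rows = conn.execute("""
        SELECT customer_id, email
        FROM unified_customers
        WHERE lifetime_value > 250
    """).fetchall()

    print(rows)  # [('cust-42', 'ann@example.com')]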



The Light Bulbs Have Ears: Why Listening Is Voice-Activated Devices' Most Important Skill

2018-01-15T16:26:13.666-05:00

If one picture’s worth a thousand words, why is everyone rushing to replace graphical interfaces with voice-activated systems?

The question has an answer, which we’ll get to below. But even though the phrasing is a bit silly, it truly is worth asking. Anyone who’s ever tried to give written driving directions and quickly switched to drawing a map knows how hard it is to accurately describe any process in words. That’s why research like this study from Invoca shows consumers only want to engage with chatbots on simple tasks and quickly revert to speaking to a human for anything complicated. And it’s why human customer support agents are increasingly equipped with screen sharing tools that let them see what their customer is seeing instead of just talking about it.

Or to put it another way: imagine a voice-activated car that uses spoken commands to replace the steering wheel, gear shifter, gas and brake pedals. It’s a strong candidate for Worst Idea Ever. Speaking the required movements is much harder than making the movements directly. By contrast, the idea of a self-driving car is hugely appealing. That car would also be voice-activated, in the sense that you get in and tell it where to go. The difference between the two scenarios isn’t vocal instructions or even the ability of the system to engage in a human-like conversation. Some people might like their car to engage in witty banter, but those with human friends would probably rather talk with them by phone or spend their ride quietly. A brisk “yes, ma'am” and confirmation that the car understood the instructions correctly should usually suffice.

What makes the self-driving car appealing isn’t that it can listen or speak, but that it can act autonomously. And what makes that autonomy possible is situational awareness – the car's ability to understand its surrounding environment, including its occupant’s intentions, and to respond appropriately. The same is ultimately true of other voice-activated devices. If Alexa and her cousins could only do exactly what we told them, they’d be useful in limited situations – say, to turn on the kitchen lights when your hands are full of groceries. But their exciting potential is to do much more complicated things on their own, like ordering those groceries in the first place (and, eventually, coordinating with other devices to receive the grocery delivery, put the groceries in the right cabinets, prepare a delicious dinner, and clean the dishes). This autonomy only happens if the devices really understand what we want and how to make it happen. Some of that understanding comes from artificial intelligence but the real limit is what data the AI has available to process. So I’d argue that the most important skill of the voice-activated devices is really listening. That’s how they collect the data they need to act appropriately. And the larger vision is for all these devices to pool the information they gather, allowing each device to do a better job by itself and in cooperation with the others.

Whether you want to live in a world where the walls, cars, refrigerators, thermostats, doorknobs, and light bulbs all have ears is debatable. But that’s where we’re headed, barring some improbable-but-not-impossible Black Swan event that changes everything. (Like, say, a devastating security flaw in nearly every microprocessor on the planet that goes undetected for years…wait, that just happened.) Still, in the context of this blog, what really matters is how it all affects marketers.
From that perspective, voice interfaces are highly problematic because they make advertising much harder: instead of passively lurking in the corners of a computer s[...]



What's Next for Customer Data Platforms? New Report Offers Some Clues.

2018-01-03T08:47:12.461-05:00

The Customer Data Platform Institute released its semi-annual Industry Update today. (Download it here). It’s the third edition of this report, which means we now can look at trends over time. The two dozen vendors in the original report have grown about 25% when measured by employee counts in LinkedIn, which is certainly healthy although not the sort of hyper growth expected from an early stage industry. On the other hand, the report has added two dozen more vendors, which means the measured industry size has doubled. Total employee counts have doubled too. Since many of the new vendors were outside the U.S., LinkedIn probably misses a good portion of their employees, meaning actual growth was higher still.

The tricky thing about this report is that the added vendors aren’t necessarily new companies. Only half were founded in 2014 or later, which might mean they’ve just launched their products after several years of development. The rest are older. Some of these have always been CDPs but just recently came to our attention. This is especially true of companies from outside the U.S. But most of the older firms started as something else and reinvented themselves as CDPs, either through product enhancements or simply by adopting the CDP label. Ultimately it’s up to the report author (that would be me) to decide which firms qualify for inclusion. I’ve done my best to list only products that actually meet the CDP definition.* But I do give the benefit of the doubt to companies that adopted the label. After all, there’s some value in letting the market itself decide what’s included in the category.

What’s most striking about the newly-listed firms is they are much more weighted towards customer engagement systems than the original set of vendors. Of the original two dozen vendors, eleven focused primarily on building the CDP database, while another six combined database building with analytics such as attribution or segmentation. Only the remaining seven offered customer engagement functions such as personalization, message selection, or campaign management. That’s 29%.** By contrast, 18 of the 28 added vendors offer customer engagement – that’s 64%. It’s a huge switch. The added firms aren’t noticeably younger than the original vendors, so this doesn’t mean there’s a new generation of engagement-oriented CDPs crowding out older, data-oriented systems. But it does mean that more engagement-oriented firms are identifying themselves as CDPs and adding CDP features as needed to support their positioning. So I think we can legitimately view this as validation that CDPs offer something that marketers recognize they need.

What we don’t know is whether engagement-oriented CDPs will ultimately come to dominate the industry. Certainly they occupy a growing share. But the data- and analysis-oriented firms still account for more than half of the listed vendors (52%) and even higher proportions of employees (57%), new funding (61%) and total funding (74%). So it’s far from clear that the majority of marketers will pick a CDP that includes engagement functions. So far, my general observation has been that engagement-oriented CDPs appeal more to mid-size firms while data and analysis oriented CDPs appeal most to large enterprises. I think the reason is that large enterprises already have good engagement systems or prefer to buy such systems separately.
Smaller firms are more likely to want to replace their engagement systems at the same time they add a CDP and want to tie the CDP directly to profit-generating engagement functions. Smaller firms are also more sensitive to integration costs,[...]



Surprising Results in Customer Data Platform Survey

2017-12-19T21:52:09.594-05:00

The Customer Data Platform Institute (CDPI) recently surveyed its members about their customer-facing systems and CDP deployment plans. (Click here to download the full report.) While CDP Institute members are obviously not typical marketers (being smarter, richer, and better looking), the answers still provide some intriguing insights into the marketplace.

Let’s start with the general state of customer-facing systems. One-third reported they had many disconnected systems, just over one-third (37%) reported they had many systems connected to a central platform of some sort (9% unified customer database, 9% unified database and orchestration system, or 19% marketing automation or CRM platform), 6% said one system does almost everything, and the remaining 23% said they had some other configuration or didn’t know. I’ve compared these results below with several other surveys that asked similar questions. The one thing that immediately jumps out in this comparison is the CDPI Member survey showed a much lower percentage of replies for “one primary system”. Otherwise, the answers across all surveys are very roughly similar, showing about 30% to 50% of companies having many connected systems and many disconnected systems. This suggests more integration than I’d expect, but it depends on how much integration those “many connected” systems really represent.

The CDPI Member survey also asked about plans regarding Customer Data Platforms. Nineteen percent had a CDP already deployed, 18% had deployment in progress, and 26% planned to start deployment within the next 12 months. The remaining 38% either planned to start after 12 months (4%), had no plans to deploy (19%) or didn’t know (14%). I haven’t seen any other survey that asks this question but have no doubt that CDP deployment and plans are much higher for CDPI members than the rest of the industry average.

Where things get really interesting is when we explore how the same people answered these two questions. At first blush, you’d assume the 19% with a deployed CDP would be the same 18% who said they had many systems connected to a unified customer platform, either by itself or with an orchestration engine attached. Not so much. Here’s the actual cross tab of the results.

What you see (in the yellow cells) is just 42% of the people who said they had a deployed CDP also said they had many systems connected to a unified database. If we allow that a deployed CDP could be present in companies where one customer-facing system does almost everything or the customer-facing systems are connected to marketing automation or CRM, then 74% of the deployed CDPs are covered.

I take the remaining 26% as a healthy reminder that just having a CDP doesn’t guarantee all your systems will connect to that CDP, either by feeding data into it or reading data from it. In fact, we know that many CDPs support analytics without being connected to delivery systems, so this really shouldn’t come as a surprise.

On a more encouraging note, as the green cells highlight, a good majority of in-process and planned CDP deployments are at companies with many disconnected systems or many systems connected to a marketing automation or CRM. Those are the companies most in need of data unification. So it does appear that the CDP message is reaching its target audience and CDPs are being used as intended.

The survey also asked about company revenue, business type (B2B vs B2C), and region.
Comparing those answers with current systems and CDP deployment also gave some interesting and unexpected results. But there's no point in repeating them here since you can download the full repor[...]
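To make the cross tab discussed above concrete, here is a minimal sketch of how such a deployment-status-by-system-configuration table can be computed. The CDPI data isn't published row by row, so the column names and the handful of sample responses below are hypothetical stand-ins, not the actual survey data.

```python
# Illustrative only: hypothetical respondents, not the real CDPI survey rows.
import pandas as pd

# Each row is one survey respondent.
responses = pd.DataFrame({
    "cdp_status": ["deployed", "deployed", "in_progress", "planned", "no_plans",
                   "deployed", "in_progress", "planned", "no_plans", "deployed"],
    "current_systems": ["many_connected_unified_db", "one_primary_system",
                        "many_disconnected", "many_disconnected",
                        "many_connected_ma_crm", "many_connected_unified_db",
                        "many_connected_ma_crm", "many_disconnected",
                        "many_disconnected", "many_connected_ma_crm"],
})

# Cross tab of CDP deployment status against system configuration, normalized
# by row so each cell shows the share of that deployment status -- the kind of
# "42% of deployed CDPs" figure quoted in the post.
xtab = pd.crosstab(responses["cdp_status"], responses["current_systems"],
                   normalize="index")
print(xtab.round(2))
```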



Here's a Game to Illustrate Strategic Planning

2017-12-06T23:14:54.242-05:00

My wife is working on a Ph.D. in education and recently took a course on strategic planning for academic institutions. Her final project included creating a game to help illustrate the course lessons. What she came up with struck me as applicable to planning in all industries, so I thought I’d share it here.

The fundamental challenge she faced in designing the game was to communicate key concepts about strategic planning. The main message was that strategic planning is about choosing among different strategies to find the one that best matches available resources. That’s pretty abstract, so she made it concrete by presenting game players with a collection of alternative strategies, each on a card of its own. She then created a second set of cards that listed actions available to the players. Each action card showed which strategies the action supported and what resources it required. There were four resources: money, faculty, students, and administrative staff.  To keep things simple, she assumed that total resources were fixed, that each strategy contributed equally to the ultimate goal, and that each action contributed equally to whichever strategies it supported.  In other words, the components of the game were:

- One goal. In the case of my wife’s game, the goal was to achieve a “top ten” ranking for a particular department within a university. (It was a good goal because it was easily understood and measured.)
- Four strategies. In my wife’s game, the options were to build up the department, cooperate with other departments at the university, cooperate with other universities, or promote the department to the media and general public.
- A dozen actions. Each action supported at least one strategy (scored with 1 or 0) and consumed some quantity of the four resources (scored from 0 to 3 for each resource). Actions were things like “run a conference”, “set up a cross-disciplinary course”, and “finance faculty research”.
- Four resources, each assigned an available quantity (i.e., budget).

As you can tell from the description, the action cards are the central feature of the game.  Here's a concrete example, where each row represents one action card:

The fundamental game mechanism was to pick a set of actions.  These were scored by counting how many supported each strategy and how many resources they consumed.  The resource totals couldn't exceed the available quantities for each resource.  The table below shows scoring for a set of three actions.  In this particular example, all three actions support "cooperate with other departments", while two support "build department" and one each supports "cooperate with other universities" and "promote to public".  Resource needs were money = 8, faculty = 6, students = 5, and administration = 1.  Someone with these cards could choose "cooperate with other departments" as the best strategy -- if the resources permitted.  But if they were limited to 7 points for each resource, they might swap the "fund scholarship" card for the "extracurricular enrichment" card, which uses less money even though it consumes more of the other resources.  
That works because, with a budget of 7 for each resource, the player can afford to increase spending in the other categories. As this example suggests, the goal of the game is to get players to think about the relations among strategies, actions and resources, and in particular how to choose actions that fit with strategies and resources. Although the basic scoring approach is built into the game, there are many ways my wife could have played it:
- Predefine available r[...]
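For readers who want to tinker with the mechanics, here is a minimal sketch of the scoring rules described above. The three card names come from the post, but the per-card strategy assignments and resource costs are my own illustrative guesses, chosen only so the totals match the post's example (money = 8, faculty = 6, students = 5, administration = 1).

```python
# Minimal sketch of the game's scoring mechanism; card details are illustrative.
from dataclasses import dataclass

RESOURCES = ["money", "faculty", "students", "administration"]

@dataclass
class ActionCard:
    name: str
    strategies: set   # strategies this action supports (each counted once)
    costs: dict       # resource name -> units consumed (0 to 3)

def score_hand(cards, budget):
    """Count strategy support and resource use for a chosen set of actions."""
    strategy_scores, resource_totals = {}, {r: 0 for r in RESOURCES}
    for card in cards:
        for s in card.strategies:
            strategy_scores[s] = strategy_scores.get(s, 0) + 1
        for r, cost in card.costs.items():
            resource_totals[r] += cost
    within_budget = all(resource_totals[r] <= budget[r] for r in RESOURCES)
    return strategy_scores, resource_totals, within_budget

hand = [
    ActionCard("run a conference",
               {"cooperate with other departments", "promote to public"},
               {"money": 3, "faculty": 2, "students": 1, "administration": 1}),
    ActionCard("set up a cross-disciplinary course",
               {"cooperate with other departments", "build department"},
               {"money": 2, "faculty": 3, "students": 2, "administration": 0}),
    ActionCard("fund scholarship",
               {"cooperate with other departments", "build department",
                "cooperate with other universities"},
               {"money": 3, "faculty": 1, "students": 2, "administration": 0}),
]
budget = {r: 7 for r in RESOURCES}
print(score_hand(hand, budget))   # money totals 8 > 7, so the hand is over budget
```

Running this with a budget of 7 per resource flags the hand as over budget on money, which is exactly the situation where the post suggests swapping "fund scholarship" for a cheaper card.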



2017 Retrospective: Things I Didn't Predict

2017-12-02T10:32:32.131-05:00

It’s the time of year when people make predictions. It’s not my favorite exercise: the best prediction is always that things will continue as they are, but what’s really interesting is change – and significant change is inherently unpredictable. (See Nassim Nicholas Taleb's The Black Swan and Philip Tetlock's Superforecasting on those topics.) So I think instead I’ll take a look at surprising changes that already happened. I’ve covered many of these in the daily newsletter of the Customer Data Platform Institute (click here to subscribe for free). In no particular order, things I didn’t quite expect this year include:

- Pushback against the walled garden vendors (Facebook, Google, Amazon, Apple, etc.). Those firms continue to dominate life online, and advertising and ecommerce in particular. (Did you know that Amazon accounted for more than half of all Black Friday sales last week?) But the usual whining about their power from competitors and ad buyers has recently been joined by increasing concerns among the public, media, and government. What’s most surprising is that it took so long for the government to recognize the power those companies have accrued and the very real threat they pose to governmental authority. (See Martin Gurri’s The Revolt of the Public for by far the best explanation I’ve seen of how the Internet affects politics.)  On the other hand, the concentrated power of the Web giants means they could easily be converted into agents of control if the government took over.  Don’t think this hasn’t occurred to certain (perhaps most) people in Washington.  Perhaps that’s why they’re not interested in breaking them up.  Consistent with this thought: the FCC plan to end Net Neutrality will give much more power to cable companies, which as highly regulated utilities have a long history of working closely with government authorities. It’s pitifully easy to imagine the cable companies as enthusiastic censors of unapproved messages.

- Growth in alternative personal data sources. Daily press announcements include a constant stream of news from companies that have found some new way to accumulate data about where people are going, who they meet, what they’re buying, what they plan to buy, what content they’re consuming, and pretty much everything else. Location data is especially common, derived from mobile apps that most people surely don’t realize are tracking them. But I’ve seen other creative approaches such as scanning purchase receipts (in return for a small financial reward, of course) and even using satellite photos to track store foot traffic. In-store technology such as beacons and wifi tracks behaviors even more precisely, and I’ve seen some fascinating (and frightening) claims about visual technologies that capture people's emotions as well as identities. Combine those technologies with ubiquitous high-resolution cameras, both mounted on walls and built into mobile devices, and the potential to know exactly who does and thinks what is all too real. Cross-device matching and cross-channel identity matching (a.k.a. “onboarding”) are part of this too.

- Growth in voice interfaces. Voice interfaces don't have the grand social implications of the preceding items, but it’s still worth noting that voice-activated devices (Amazon Alexa and friends) and interfaces (Siri, Cortana, etc.) have grown more quickly than I anticipated. The change does add new challenges for marketers who were already having a hard time figuring out where to put ads on a mobile phone screen.  
With voice, they have no screen a[...]



Do Customer Data Platforms Need Identity Matching? The Answer May Surprise You.

2017-11-22T14:38:48.338-05:00

I spend a lot of time with vendors trying to decide whether they are, or should be, a Customer Data Platform. I also spend a lot of time with marketers trying to decide which CDPs might be right for them. One topic that’s common in both discussions is whether a CDP needs to include identity resolution – that is, the ability to decide which identifiers (name/address, phone number, email, cookie ID, etc.) belong to the same person.

It seems like an odd question. After all, the core purpose of a CDP is to build a unified customer database, which requires connecting those identifiers so data about each customer can be brought together. So surely identity resolution is required. Turns out, not so much. There are actually several reasons.

- Some marketers don’t need it. Companies that deal only in a single channel often have just one identifier per customer.  For example, Web-only companies might use just a cookie ID.  True, channel-specific identifiers sometimes change (e.g., cookies get deleted).  But there may be no practical way to link old and new identifiers when that happens, or marketers may simply not care.  A more common situation is that companies have already built an identity resolution process, often because they’re dealing with customers who identify themselves by logging in or who transact through accounts. Financial institutions, for example, often know exactly who they’re dealing with because all transactions are associated with an account that’s linked to a customer's master record (or perhaps not linked because the customer prefers it that way). Even when identity resolution is complicated, mature companies often (well, sometimes) have mature processes to apply a customer ID to all data before it reaches the CDP. In any of these cases, the CDP can use the ID it’s given and doesn't need an identity resolution process of its own.

- Some marketers can only use it if it’s perfect. Again, think of a financial institution: it can’t afford to guess who’s trying to take money out of an account, so it requires the customer to identify herself before making a transaction. In many other circumstances, absolute certainty isn’t required but a false association could be embarrassing or annoying enough that the company isn’t willing to risk it. In those cases, all that’s needed is an ability to “stitch” together identifiers based on definite connections. That might mean two devices are linked because they both sent emails using the same email address, or an email and phone number are linked because someone entered them both into a registration form. Almost every CDP has this sort of “deterministic” linking capability, which is so straightforward that it barely counts as identity resolution in the broader sense. (A minimal sketch of deterministic stitching appears after this list.)

- Specialized software already exists. The main type of matching that CDPs do internally – beyond simple stitching – is “fuzzy” matching.  This applies rules to decide when two similar-looking records really refer to the same person. It's most commonly applied to names and postal addresses, which are often captured inconsistently from one source to the next. It might sometimes be applied to other types of data, such as different forms of an email address (e.g., draab@raabassociates.com and draab@raabassociatesinc.com). The technology for this sort of matching gets very complicated very quickly, and it’s something that specialized vendors offer either for purchase or as a service. So CDP vendors can quite reasonably argue they needn’t build this for themselves but should simply int[...]
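Here is the promised sketch of deterministic "stitching". It is my own illustration of the general idea, not any particular vendor's code: identifiers observed together on the same event (a registration form, an email send, a login) are merged into one profile using a simple union-find structure.

```python
# Minimal sketch of deterministic identity stitching (illustrative only).

def stitch(events):
    """events: list of identifier sets seen together. Returns a dict mapping
    each identifier to a shared profile key."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for ids in events:
        ids = list(ids)
        find(ids[0])                        # register even a lone identifier
        for other in ids[1:]:
            union(ids[0], other)

    return {identifier: find(identifier) for identifier in parent}

profiles = stitch([
    {"cookie:abc", "email:d@x.com"},        # same device sent an email
    {"email:d@x.com", "phone:555-1234"},    # both entered on a registration form
    {"cookie:xyz"},                         # never linked, stays its own profile
])
print(profiles)
```

Fuzzy matching is a different and much harder problem: instead of exact shared identifiers, it has to score the similarity of names, addresses, and other inconsistently captured fields, which is why it is usually left to specialist vendors.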



No, Users Shouldn't Write Their Own Software

2017-11-10T11:58:03.654-05:00

Salesforce this week announced “myEinstein” self-service artificial intelligence features to let non-technical users build predictive models and chatbots. My immediate reaction was that this is a bad idea: top-of-the-head objections include duplicated effort, wasted time, and the potential for really bad results. I'm sure I could find other concerns if I thought about it, but today’s world brings a constant stream of new things to worry about, so I didn’t bother. But then today’s news described an “Everyone Can Code” initiative from Apple, which raised essentially the same issue in even clearer terms: should people create their own software?

I thought this idea had died a well-deserved death decades ago. There was a brief period when people thought that “computer literacy” would join reading, writing, and arithmetic as basic skills required for modern life. But soon they realized that you can run a computer using software someone else wrote!* That made the idea of everyone writing their own programs seem obviously foolish – specifically because of duplicated effort, wasted time, and the potential for really bad results. It took IT departments much longer to come around to the notion of buying packaged software instead of writing their own, but even that battle has now mostly been won. Today, smart IT groups only create systems to do things that are unique to their business and provide significant competitive advantage.

But the idea of non-technical workers creating their own systems isn't just about packaged vs. self-written software. It generally arises from a perception that corporate systems don’t meet workers’ needs: either because the corporate systems are inadequate or because corporate IT is hard to work with and has other priorities. Faced with such obstacles to getting their jobs done, the more motivated and technically adept users will create their own systems, often working with tools like spreadsheets that aren’t really appropriate but have the unbeatable advantage of being available. Such user-built systems frequently grow to support work groups or even departments, especially at smaller companies. They’re much disliked by corporate IT, sometimes for turf protection but mostly because they pose very real dangers to security, compliance, reliability, and business continuity. Personal development on a platform like myEinstein poses many of the same risks, although the data within Salesforce is probably more secure than data held on someone’s personal computer or mobile phone.

Oddly enough, marketing departments have been a little less prone to this sort of guerilla IT development than some other groups. The main reason is probably that modern marketing revolves around customer data and customer-facing systems, which are still managed by a corporate resource (not necessarily IT: it could be Web development, marketing ops, or an outside vendor). In addition, the easy availability of Software as a Service packages has meant that even rogue marketers are using software built by professionals. (Although once you get beyond customer data to things like planning and budgeting, it’s spreadsheets all the way.)

This is what makes the notion of systems like myEinstein so dangerous (and I don’t mean to pick on Salesforce in particular; I’m sure other vendors have similar ideas in development). Because those systems are directly tied into corporate databases, they remove the firewall that (mostly) separated customer data and processes from end-user developers. 
This opens up all sorts of opportunities for [...]



TrenDemon and Adinton Offer Attribution Options

2017-11-06T13:41:28.726-05:00

I wrote a couple of weeks ago about the importance of attribution as a guide for artificial intelligence-driven marketing. One implication was that I should pay more attention to attribution systems. Here’s a quick look at two products that tackle different parts of the attribution problem: content measurement and advertising measurement.

TrenDemon

Let’s start with TrenDemon. Its specialty is measuring the impact of marketing content on long B2B sales cycles. It does this by placing a tag on client Web sites to identify visitors and track the content they consume, and then connecting client CRM systems to find which visitor companies ultimately made a purchase (or reached some other user-specified goal). Visitors are identified by company using their IP address and as individuals by tracking cookies. TrenDemon does a bit more than correlate content consumption and final outcomes. It also identifies when each piece of content is consumed, distinguishing between the start, middle, and end of the buying journey. It also looks at other content metrics such as how many people read an item, how much time they spend with it, and how many read something else after they’re done. These and other inputs are combined to generate an attribution score for each item. The system uses the score to identify the most effective items for each journey stage and to recommend which items should be presented in the future. Pricing for TrenDemon starts at $800 per month. The system was launched in early 2015 and is currently used by just over 100 companies.

Adinton

Next we have Adinton, a Barcelona-based firm that specializes in attribution for paid search and social ads. Adinton has more than 55 clients throughout Europe, mostly selling travel and insurance online. Such purchases often involve multiple Web site visits but still have a shorter buying cycle than complex B2B transactions. Adinton has pixels to capture Web ad impressions as well as Web site visits. Like TrenDemon, it tracks site visitors over time and distinguishes between starting, middle, and finishing clicks. It also distinguishes between attributed and assisted conversions. When possible, it builds a unified picture of each visitor across devices and channels. The system uses this data to calculate the cost of different click types, which it combines to create a “true” cost per action for each ad purchase. It compares this with the clients’ target cost per action to determine where they are over- or under-investing. Adinton has API connections to gather data from Google AdWords, Facebook Ads, Bing Ads, AdRoll, RocketFuel, and other advertising channels. An autobidding system can currently adjust bids in AdWords and will add Facebook and Bing adjustments in the near future. The system also does keyword research and click fraud identification. Pricing is based on the number of clicks and starts as low as $299 per month for attribution analysis, with additional fees for autobidding and click fraud modules. Adinton was founded in 2013. It launched its first product in 2014, although attribution came later.

Further Thoughts

These two products are chosen almost at random, so I wouldn’t assign any global significance to their features. But it’s still intriguing that both add a first/middle/last buying stage to the analysis. It’s also interesting that they occupy a middle ground between totally arbitrary attribution methodologies, such as first touch/last touch/fractional credit, and advanced algorithmic methods that attempt to calculate[...]
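For reference, here is a minimal sketch of the "totally arbitrary" rule-based methods the post contrasts these products with: first touch, last touch, equal fractional credit, plus a simple first/middle/last position weighting. The weights and touch names are invented for illustration and are not how TrenDemon or Adinton actually score.

```python
# Illustrative rule-based attribution for one converting customer's touches.

def attribute(touches, conversion_value, method="fractional"):
    """touches: ordered list of channel/content names for one converting customer."""
    credit = {t: 0.0 for t in touches}
    if method == "first_touch":
        credit[touches[0]] += conversion_value
    elif method == "last_touch":
        credit[touches[-1]] += conversion_value
    elif method == "fractional":                 # equal credit to every touch
        for t in touches:
            credit[t] += conversion_value / len(touches)
    elif method == "position_based":             # e.g. 40% first, 40% last, 20% middle
        credit[touches[0]] += 0.4 * conversion_value
        credit[touches[-1]] += 0.4 * conversion_value
        middle = touches[1:-1]
        for t in middle:
            credit[t] += 0.2 * conversion_value / len(middle)
    return credit

touches = ["paid_search", "blog_post", "webinar", "retargeting_ad"]
print(attribute(touches, 100.0, method="position_based"))
```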



Flytxt Offers Broad and Deep Customer Management

2017-10-29T19:43:23.601-04:00

Some of the most impressive marketing systems I’ve seen have been developed for mobile phone marketing, especially for companies that sell prepaid phones.  I don’t know why: probably some combination of intense competition, easy switching when customers have no subscription, location as a clear indicator of varying needs, immediately measurable financial impact, and lack of legacy constraints in a new industry. Many of these systems have developed outside the United States, since prepaid phones have a smaller market share here than elsewhere.

Flytxt is a good example. Founded in India in 2008, its original clients were South Asian and African companies whose primary product was text messaging. The company has since expanded in all directions: it has clients in 50+ countries including South America and Europe plus a beachhead in the U.S.; its phone clients sell many more products than text; it has a smattering of clients in financial services and manufacturing; and it has corporate offices in Dubai and headquarters in the Netherlands.

The product itself is equally sprawling. Its architecture spans what I usually call the data, decision, and delivery layers, although Flytxt uses different language. The foundation (data) layer includes data ingestion from batch and real-time sources with support for structured, semi-structured and unstructured data, data preparation including deterministic identity stitching, and a Hadoop-based data store. The intelligence (decision) layer provides rules, recommendations, visualization, packaged and custom analytics, and reporting. The application (delivery) layer supports inbound and outbound campaigns, a mobile app, and an ad server for clients who want to sell ads on their own Web sites. To be a little more precise, Flytxt’s application layer uses API connectors to send messages to actual delivery systems such as Web sites and email engines.  Most enterprises prefer this approach because they have sophisticated delivery systems in place and use them for other purposes beyond marketing messaging. And while we’re being precise: Flytxt isn’t a Customer Data Platform because it doesn’t give external systems direct access to its unified customer data store.  But it does provide APIs to extract reports and selected data elements and can build custom connectors as needed. So it could probably pass as a CDP for most purposes.

Given the breadth of Flytxt’s features, you might expect the individual features to be relatively shallow. Not so. The system has advanced capabilities throughout. Examples include anonymizing personally identifiable information before sharing customer data; multiple language versions attached to a single offer; rewards linked to offers; contact frequency limits by channel across all campaigns; rule- and machine learning-based recommendations; six standard predictive models plus tools to create custom models; automated control groups in outbound campaigns; real-time event-based program triggers; and a mobile app with customer support, account management, chat, personalization, and transaction capabilities. The roadmap is also impressive, including automated segment discovery and autonomous agents to find next best actions.

What particularly caught my eye was Flytxt’s ability to integrate context with offer selection.  Real-time programs are connected to touchpoints such as a Web site.  When a customer appears, Flytxt identifies the customer, looks up her history and segment data, an[...]



When to Use a Proof of Concept in Marketing Software Selection -- And When Not

2017-10-22T16:55:51.230-04:00

“I used to hate POCs (Proof of Concepts) but now I love them,” a Customer Data Platform vendor told me recently. “We do POCs all the time,” another said when I raised the possibility on behalf of a client. Two comments could be a coincidence.  (Three make a Trend.)  But, as the first vendor indicated, POCs have traditionally been something vendors really disliked. So even the possibility that they’ve become more tolerable is worth exploring.

We should start by defining the term.  A Proof of Concept is a demonstration that something is possible. In technology in general, the POC is usually an experimental system that performs a critical function that had not previously been achieved.  A similar definition applies to software development. In the context of marketing systems, though, a POC is usually not so much an experiment as a partial implementation of an existing product.  What's being proven is the system's ability to execute key functions on the buyer's own data and/or systems. The distinction is subtle but important because it puts the focus on meeting the client's needs.

Of course, software buyers have always watched system demonstrations.  Savvy buyers have insisted that demonstrations execute scenarios based on their own business processes.  A carefully crafted set of scenarios can give a clear picture of how well a system does what the client wants.  Scenarios are especially instructive if the user can operate the system herself instead of just watching a salesperson.  What scenarios don’t illustrate is loading a buyer’s data into the system or the preparation needed to make that data usable. That’s where the POC comes in.

The cost of loading client data was the reason most vendors disliked POCs. Back in the day, it required detailed analysis of the source data and hand-tuning of the transformation processes to put the data into the vendor’s database.  Today this is much easier because source systems are usually more accessible and marketing systems – at least if they’re Customer Data Platforms – have features that make transformation and mapping much more efficient. The ultimate example of easier data loads is the one-click connection between many marketing automation and CRM “platforms” and applications that are pre-integrated with those platforms. The simplicity is possible because the platforms and the apps are cloud-based, Software as a Service products.  This means there are no custom implementations or client-run systems to connect.

Effortless connections let many vendors offer free trials, since little or no vendor labor is involved in loading a client’s data.  In fact, free trials are problematic precisely because so little work goes into setting them up. Some buyers are diligent about testing their free trial system and get real value from the experience. But many set up a free trial and then don't use it, or use it briefly without putting in the effort to learn how the system works.  This means that all but the simplest products don’t get a meaningful test, and users often underestimate the value of a system because they haven’t learned what it can do.

POCs are not quite the same as free trials because they require more effort from the vendor to set up.  In return, most vendors will require a corresponding effort from the buyer to test the POC system.  On balance that’s a good thing since it ensures th[...]



Wizaly Offers a New Option for Algorithmic Attribution

2017-10-16T15:43:59.616-04:00

Wizaly is a relatively new entrant in the field of algorithmic revenue attribution – a function that will be essential for guiding the artificial-intelligence-driven marketing of the future. Let’s take a look at what they do.

First a bit of background: Wizaly is a spin-off of Paris-based performance marketing agency ESV Digital (formerly eSearchVision). The agency’s performance-based perspective meant it needed to optimize spend across the entire customer journey, not simply use first- or last-click attribution approaches, which ignore intermediate steps on the path to purchase. Wizaly grew out of this need.

Wizaly’s basic approach to attribution is to assemble a history of all messages seen by each customer, classify customers based on the channels they saw, compare results of customers whose experience differs by just one channel, and attribute any difference in results to that channel. For example, one group of customers might have seen messages in paid search, organic search, and social; another might have seen messages in those channels plus display retargeting. Any difference in performance would be attributed to display retargeting. This is a simplified description; Wizaly is also aware of other attributes such as the profiles of different customers, traffic sources, Web site engagement, location, browser type, etc. It apparently factors some or all of these into its analysis to ensure it is comparing the performance of otherwise-similar customers. It definitely lets users analyze results based on these variables so they can form their own judgments.

Wizaly gets its data primarily from pixels it places on ads and Web pages. These drop cookies to track customers over time and can track ads that are seen, even if they’re not clicked, as well as detailed Web site behaviors. The system can incorporate television through an integration with Realytics, which correlates Web traffic with when TV ads are shown. It can import ad costs and ingest offline purchases to use in measuring results. The system can stitch together customer identities using known identifiers. It can also do some probabilistic matching based on behaviors and connection data and will supplement this with data from third-party cross-device matching specialists.

Reports include detailed traffic analysis, based on the various attributes the system collects; estimates of the importance and effectiveness of each channel; and recommended media allocations to maximize the value from ad spending.  The system doesn't analyze the impact of message or channel sequence, compare the effectiveness of different messages, or estimate the impact of messages on long-term customer outcomes. As previously mentioned, it has a partial blind spot for mobile – a major concern, given how important mobile has become – and other gaps for offline channels and results. These are problems for most algorithmic attribution products, not just Wizaly.

One definite advantage of Wizaly is price: at $5,000 to $15,000 per month, it is generally cheaper than better-known competitors. Pricing is based on traffic monitored and data stored. The company was spun off from ESV Digital in 2016 and currently has close to 50 clients worldwide.[...]
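To make the comparison logic described above concrete, here is a deliberately simplified sketch: group customers by the set of channels they saw, compare conversion rates between groups whose channel sets differ by exactly one channel, and credit the difference to that channel. Wizaly's actual model also controls for customer profiles, traffic sources, location, and so on, which this toy version ignores entirely; the sample data is invented.

```python
# Toy version of channel-set comparison for attribution (illustrative only).
from collections import defaultdict

def channel_lift(customers):
    """customers: list of (channel_set, converted_bool) tuples."""
    # Conversion rate per distinct channel set.
    stats = defaultdict(lambda: [0, 0])          # frozenset -> [conversions, count]
    for channels, converted in customers:
        key = frozenset(channels)
        stats[key][0] += int(converted)
        stats[key][1] += 1
    rates = {k: conv / n for k, (conv, n) in stats.items()}

    # Compare pairs of groups that differ by exactly one channel.
    lifts = defaultdict(list)
    groups = list(rates)
    for i, a in enumerate(groups):
        for b in groups[i + 1:]:
            diff = a.symmetric_difference(b)
            if len(diff) == 1:
                extra = next(iter(diff))
                larger, smaller = (a, b) if extra in a else (b, a)
                lifts[extra].append(rates[larger] - rates[smaller])

    # Average the observed lifts for each channel.
    return {ch: sum(vals) / len(vals) for ch, vals in lifts.items()}

customers = [
    ({"paid_search", "organic", "social"}, True),
    ({"paid_search", "organic", "social"}, False),
    ({"paid_search", "organic", "social", "retargeting"}, True),
    ({"paid_search", "organic", "social", "retargeting"}, True),
]
print(channel_lift(customers))   # lift credited to "retargeting"
```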



Attribution Will Be Critical for AI-Based Marketing Success

2017-10-08T07:31:32.279-04:00

I gave my presentation on Self-Driving Marketing Campaigns at the MarTech conference last week. Most of the content followed the arguments I made here a couple of weeks ago, about the challenges of coordinating multiple specialist AI systems. But prepping for the conference led me to refine my thoughts, so there are a couple of points I think are worth revisiting.

The first is the distinction between replacing human specialists with AI specialists, and replacing human managers with AI managers. Visually, the first progression looks like this, as AI gradually takes over specialized tasks in the marketing department: The insight here is that while each machine presumably does its job much better than the human it replaces,* the output of the team as a whole can’t fundamentally change because of the bottleneck created by the human manager overseeing the process. That is, work is still organized into campaigns that deal with customer segments because the human manager needs to think in those terms. It’s true that the segments will keep getting smaller, the content within each segment more personalized, and more tests will yield faster learning. But the human manager can only make a relatively small number of decisions about what the robots should do, and that puts severe limits on how complicated the marketing process can become.

The really big change happens when that human manager herself is replaced by a robot: Now, the manager can also deal with more-or-less infinite complexity. This means we no longer need campaigns and segments and can truly orchestrate treatments for each customer as an individual. In theory, the robot manager could order her robot assistants to create custom messages and offers in each situation, based on the current context and past behaviors of the individual human involved. In essence, each customer has a personal robot following her around, figuring out what’s best for her alone, and then calling on the other robots to make it happen. Whether that's a paradise or a nightmare is beyond the scope of this discussion.

In my post a few weeks ago, I was very skeptical that manager robots would be able to coordinate the specialist systems any time soon.  That now strikes me as less of a barrier.  Among other reasons, I’ve seen vendors including Jivox and RevJet introduce systems that integrate large portions of the content creation and delivery workflows, potentially or actually coordinating the efforts of multiple AI agents within the process. I also had an interesting chat with the folks at Albert.ai, who have addressed some of the knottier problems about coordinating the entire campaign process. These vendors are still working with campaigns, not individual-level journey orchestration. But they are definitely showing progress.

As I've become less concerned about the challenges of robot communication, I've grown more concerned about robots making the right decisions.  In other words, the manager robot needs a way to choose what the specialist robots will work on so they are doing the most productive tasks. The choices must be based on estimating the value of different options.  Creating such estimates is the job of revenue attribution.  So it turns out that accurate attribution is a critical requirement for AI-based orchestration.

That’s an important insight.  All marketers acknowledge that attribution is important but most have focused the[...]
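To make that dependency concrete, here is a toy sketch, with invented numbers, of why the manager robot needs attribution: once attribution has estimated the incremental value of each candidate action for each customer, the orchestration decision reduces to picking the highest-value action.

```python
# Toy illustration: attribution-estimated incremental values drive the
# orchestration choice. All numbers and action names are invented.

estimated_value = {
    "customer_1": {"send_offer_email": 4.20, "show_retargeting_ad": 1.10, "do_nothing": 0.0},
    "customer_2": {"send_offer_email": 0.30, "show_retargeting_ad": 2.50, "do_nothing": 0.0},
}

def next_best_action(values_by_action):
    """Pick the action with the highest attributed incremental value."""
    return max(values_by_action, key=values_by_action.get)

for customer, values in estimated_value.items():
    print(customer, "->", next_best_action(values))
```

If the attribution estimates are wrong, the "manager" simply optimizes toward the wrong tasks, which is the point the post is making.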



Customer Data Platforms Spread Their Wings

2018-01-13T09:46:44.831-05:00

I escaped from my cave this week to present at two conferences: the first-ever “Customer Data Platform Summit” hosted by AgilOne in Los Angeles, preceding Shop.org, and the Technology for Marketing conference in London, where BlueVenn sponsored me. I listened as much as I could along the way to find what’s new with the vendors and their clients. There were some interesting developments.

Broader awareness of CDP. The AgilOne event was invitation-only while the London presentation was open to any conference attendee, although BlueVenn did personally invite companies it wanted to attend. Both sets of listeners were already aware of CDPs, which isn’t something I’d expect to have seen a year or two ago. Both also had a reasonable notion of what a CDP does. But they still seemed to need help distinguishing CDPs from other types of systems, so we still have plenty more work to do in educating the market.

Use of CDPs beyond marketing. People in both cities described CDPs being bought and used throughout client organizations, sometimes after marketing was the original purchaser and sometimes as a corporate project from the start. That was always a potential but it’s delightful to hear about it actually happening. The more widely a CDP is used in a company, the more value the buyer gets – and the more benefit to the company’s customers. So hooray for that.

CDPs in vertical markets. The AgilOne audience were all retailers, not surprisingly given AgilOne’s focus and the relation of the event to Shop.org. But I heard in London about CDPs in financial services, publishing, telecommunications, and several other industries where CDP hasn’t previously been used much. More evidence of the broader awareness and the widespread need for the solution that CDP provides.

CDP for attribution. While in London I also stopped by the office of Fospha, another CDP vendor, which has just become a Sponsor of the CDP Institute. They are unusual in having a focus on multi-touch attribution, something we’ve seen in a couple of other CDPs but definitely less common than campaign management or personalization. That caught my attention because I just finished an analysis of artificial intelligence in journey orchestration, in which one major conclusion was that multi-touch attribution will be a key enabling technology. That needs a blog post of its own to explain, but the basic reason is that AI needs attribution (specifically, estimating the incremental value of each marketing action) as a goal to optimize against when it's comparing investments in different marketing tasks (content, media, segmentation, product, etc.).

If there's a common thread here, it's that CDPs are spreading beyond their initial buyers and applications. I’ll be presenting next week at yet another CDP-focused event, this one sponsored by BlueConic in advance of the Boston Martech Conference. Who knows what new things we'll see there?[...]