Subscribe: Customer Experience Matrix
http://customerexperiencematrix.blogspot.com/atom.xml

Customer Experience Matrix



This is the blog of David M. Raab, marketing technology consultant and analyst. Mr. Raab is Principal at Raab Associates Inc. The blog is named for the Customer Experience Matrix, a tool to visualize marketing and operational interactions between a comp



Updated: 2018-02-21T01:03:37.391-05:00

 



How Customer Data Platforms Help with Marketing Performance Measurement

2018-02-19T20:48:10.743-05:00

John Wanamaker, patron saint of marketing measurement.

If you’ve been following my slow progress towards a set of screening questions for Customer Data Platforms, you may recall that “incremental attribution” was on the list. The original reason was that some of the systems I first identified as CDPs offered incremental attribution as their primary focus. Attribution also seemed like a specific enough feature that it could be meaningfully distinguished from marketing measurement in general, which nearly any CDP could support to some degree.

But as I gathered answers from the two dozen vendors who will be included in the CDP Institute’s comparison report, I found that at best one or two provide the type of attribution I had in mind. This wasn't enough to include in the screening list. But there was an impressive variety of alternative answers to the question. Those are worth a look.

- Marketing mix models. This is the attribution approach I originally intended to cover. It gathers all the marketing touches that reach a customer, including email messages, Web site views, display ad impressions, search marketing headlines, and whatever else can be captured and tied to an individual. Statistical algorithms then look at customers who had a similar set of contacts except for one item and attribute any difference in performance to that item. In practice, this is much more complicated than it sounds, because the system needs to deal with different levels of detail and intelligently combine cases that lack enough data to treat separately. The result is an estimate of the average value generated by incremental spending in each channel. These results are sometimes combined with estimates created using different techniques to cover channels that can’t be tied to individuals, such as broadcast TV. The estimates are used to find the optimal budget allocation across all channels, a.k.a. the marketing mix.

- Next best action and bidding models. These also estimate the impact of a specific marketing message on results, but work at the individual rather than channel level. The system uses a history of marketing messages and results to predict the change in revenue (or other target behavior) that will result from sending a particular message to a particular individual. One typical use is deciding how much to bid for a display ad impression; another is choosing products or offers to make during an interaction. They differ from incremental attribution because they create separate predictions for each individual based on their history and the current context. Several CDP systems offer this type of analysis, but it’s ultimately not different enough from other predictive analytics to treat it as a distinct specialty.

- First/last/fractional touch. These methods use the individual-level data about marketing contacts and results, but apply fixed rules to allocate credit. They are usually limited to online advertising channels. The simplest rules attribute all results to either the first or last interaction with a buyer. Fractional methods divide the credit among several touches but use predefined rules to do the allocation rather than weights derived from actual data. These methods are widely regarded as inadequate but are by far the most commonly used, because the alternatives are so much more difficult. Several CDPs offer these methods.

- Campaign analysis. This looks at the impact of a particular marketing campaign on results. Again, the fundamental method is to compare performance of individuals who received a particular treatment with those who didn’t. But there’s usually more of an effort to ensure the treated and non-treated groups are comparable, either by setting up a/b test splits in advance or by analyzing results for different segments after the fact. The primary unit of analysis here is the campaign audience, not the specific individuals. The goal is usually to compare results for campaigns in the same channel, not to c[...]
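The fixed first/last/fractional rules are simple enough to sketch in a few lines. Here is a toy Python illustration (the function name, journey data, and revenue figure are invented for the example, not drawn from any vendor's product):

```python
# Toy sketch of rule-based attribution; data and names are illustrative.

def allocate_credit(touches, revenue, rule="last"):
    """Split `revenue` across an ordered list of touchpoints
    using a fixed rule: 'first', 'last', or 'linear'."""
    if not touches:
        return {}
    credit = {t: 0.0 for t in touches}
    if rule == "first":
        credit[touches[0]] += revenue
    elif rule == "last":
        credit[touches[-1]] += revenue
    elif rule == "linear":  # equal fractional split across all touches
        share = revenue / len(touches)
        for t in touches:
            credit[t] += share
    else:
        raise ValueError(f"unknown rule: {rule}")
    return credit

journey = ["display ad", "email", "paid search"]
print(allocate_credit(journey, 90.0, rule="last"))
print(allocate_credit(journey, 90.0, rule="linear"))
```

The contrast with marketing mix models is visible in the code itself: the weights are hard-coded by rule rather than estimated from observed differences in customer outcomes.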



Will GDPR Hurt Customer Data Platforms and the Marketers Who Use Them?

2018-02-18T14:51:44.727-05:00

Like an imminent hanging, the looming execution of the European Union’s General Data Protection Regulation (GDPR) has concentrated business leaders’ minds on their customer data. This has been a boon for Customer Data Platform vendors, who have been able to offer their systems as solutions to many GDPR requirements. But it raises some issues as well.

First the good news: CDPs are genuinely well suited to help with GDPR. They’re built to solve two of GDPR’s toughest technical challenges: connecting all internal sources of customer data and linking all data related to the same person. In particular, CDPs focus on first party (i.e., company-owned) personally identifiable information and use deterministic matching to ensure accurate linkages. Those are exactly what GDPR needs. Some CDP vendors have added GDPR-specific features such as consent gathering, usage tracking, and data review portals. But those are relatively easy once you’ve assembled and linked the underlying data.

GDPR is also good for CDPs in broader ways. Most obviously, it raises companies’ awareness of customer data management, which is the core CDP use case. It will also raise consumers' awareness of their data and their rights, which should lead to better quality customer information as consumers feel more confident that data they provide will be handled properly. (See this Accenture report that 75% of consumers are willing to share personal data if they can control how it’s used, or this PegaSystems survey in which 45% of EU consumers said they would erase their data from a company that sold or shared it with outsiders.) Conversely, GDPR-induced constraints on acquiring external data should make a company’s own data that much more valuable. Collection requirements for GDPR should also make it easier for companies to tailor the degree of personalization to individual preferences. This Adobe study found that 28% of consumers are not comfortable sharing any information with brands and 26% say that too-creepy personalization is their biggest annoyance with brand content. These results suggest there’s a segment of privacy-focused consumers who would value a privacy-centric marketing approach. (That this approach would itself require sophisticated personalization technology is an irony we marketers can quietly keep to ourselves.)

So, what's not to like? The downside to GDPR is that greater corporate interest in customer data means that marketers will not be left to manage it on their own. Marketing departments have been the primary buyers of Customer Data Platforms because corporate IT often lacks the interest and skills needed to meet marketing needs. GDPR and digital transformation don't give IT new resources, but they do mean it will be more involved. Indeed, this report from data governance vendor Erwin found that responsibility for meeting data regulations is held by IT alone at 36% of companies and is shared between IT and all business units (not just marketing) at another 55%. I’ve personally heard many recent stories about corporate IT buying CDPs.

Selling to IT departments isn’t a problem for CDP vendors. Their existing technology should work with little change. At most, they'll need to retool their sales and marketing. But marketers may suffer more. Corporate IT will have its own priorities, and marketing won’t be at the top of the list. For example, this report from master data management vendor Semarchy found that customer experience, service and loyalty applications take priority over sales and marketing applications. More broadly, studies like this one from ComputerWorld consistently show that IT departments prioritize productivity, security and compliance over customer experience and analytics.

Putting IT and legal departments in charge of customer data is likely to mean a more conservative approach to how it's used than marketers would apply on their own. This may prevent some problems but it's also likely to mak[...]



Celebrus CDP Offers In-Memory Profiles

2018-02-02T19:24:27.511-05:00

It’s almost ten years to the day since I first wrote about Celebrus, which then called itself speed-trap (a term that presumably has fewer negative connotations in the U.K. than the U.S.). Back then, they were an easy-to-deploy Web site script that captured detailed visitor behaviors. Today, they gather data from all sources, map it to a client-tailored version of a 100+ table data model, and expose the results to analytics and customer engagement systems as in-memory profiles.

Does that make them a Customer Data Platform? Well, Celebrus calls itself one – in fact, they were an early and enthusiastic adopter of the label. More important, they do what CDPs do: gather, unify, and share customer data. But Celebrus does differ in several ways from most CDP products:

- In-memory data. When Celebrus described their product to me, it sounded like they don’t keep a persistent copy of the detailed data they ingest. But after further discussion, I found they really meant they don’t keep it within those in-memory profiles. They can actually store as much detail as the client chooses and query it to extract information that hasn't been kept in memory. The queries can run in real time if needed. That’s no different from most other CDPs, which nearly always need to extract and reformat the detailed data to make it available. I’m not sure why Celebrus presents themselves this way; it might be that they have traditionally partnered with companies like Teradata and SAS that themselves provided the data store, or that they partnered with firms like Pega, Salesforce, and Adobe that positioned themselves as the primary repository, or simply to avoid ruffling feathers in IT departments that didn't want another data warehouse or data lake. In any case, don’t let this confuse you: Celebrus can indeed store all your detailed customer data and will expose whatever parts you need.

- Standard data model. Many CDPs load source data without mapping it to a specific schema. This helps to reduce the time and cost of implementation. But mapping is needed later to extract the data in a usable form. In particular, any CDP needs to identify core bits of customer information such as name, address, and identifiers that connect records related to the same person. Some CDPs do have elaborate data models, especially if they’re loading data from specific source systems or are tailored to a specific industry. Celebrus does let users add custom fields and tables, so its standard data model doesn’t ultimately restrict what the system can store.

- Real-time access. The in-memory profiles allow external systems to call Celebrus for real-time tasks such as Web site personalization or bidding on impressions. Celebrus also loads, transforms, and exposes its inputs in real time. It isn't the only CDP to do this, but it's one of just a few.

Celebrus is also a bit outside the CDP mainstream in other ways. Their clients have been largely concentrated in financial services, while most CDPs have sold primarily to online and offline retailers. While most CDPs run as a cloud-based service, Celebrus supports cloud and on-premise deployments, which are preferred by many financial services companies. Most CDPs are bought by marketing departments, but Celebrus is often purchased by customer experience, IT, analytics, and digital transformation teams and used for non-marketing applications such as fraud detection and system performance monitoring.

Other Celebrus features are found in some but not most CDPs, so they’re worth noting if they happen to be on your wish list. These include the ability to scan for events and issue alerts; handling of offline as well as online identity data; and specialized functions to comply with the European Union’s GDPR privacy rules. And Celebrus is fairly typical in limiting its focus to data assembly functions, without adding extensive analytics or customer engagement capabilities. That's particular[...]
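The general pattern of serving a compact in-memory profile while querying the detailed store on demand can be sketched in a few lines. To be clear, this is my own illustrative sketch of the pattern, not Celebrus's actual design; all names, fields, and data here are invented:

```python
# Illustrative sketch of in-memory profiles backed by a detail store
# (invented names and data, not any vendor's actual implementation).

DETAIL_STORE = {  # full event history, normally persisted on disk
    "c1": [{"event": "page_view", "url": "/pricing"},
           {"event": "purchase", "amount": 120.0}],
}

PROFILES = {  # compact in-memory view for real-time callers
    "c1": {"segment": "high value", "last_event": "purchase"},
}

def get_attribute(customer_id, field):
    """Serve from the in-memory profile; fall back to a query
    against the detailed history when the field isn't in memory."""
    profile = PROFILES.get(customer_id, {})
    if field in profile:
        return profile[field]
    if field == "total_spend":  # computed on demand from detail
        return sum(e.get("amount", 0.0)
                   for e in DETAIL_STORE.get(customer_id, []))
    return None

print(get_attribute("c1", "segment"))      # answered from memory
print(get_attribute("c1", "total_spend"))  # computed from the detail store
```

The design choice this illustrates is the one discussed above: keeping the hot profile small enough to hold in memory for real-time calls, without giving up the ability to reach back into the full detail.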



Collapse of Civilization Makes Marketers' Jobs Harder

2018-01-30T08:36:20.372-05:00

Political situations come and go, but trust is the foundation of civilization itself. So I was genuinely shaken to see a report from the Edelman PR agency that trust in U.S. institutions fell last year by a huge margin – 17% for the general public and 34% for the “informed public,” placing us dead last among 27 countries. All four measured institutions (business, media, government, and non-governmental organizations) took similar hits, although government fell the most.

You won’t be surprised to learn that concerns about fake news and social media are especially prominent. What’s less expected is that trust in traditional journalism actually increased in the U.S. The over-all decline in media trust resulted from falling confidence in news from search engines and social media. Similarly, world-wide trust increased in traditional authorities such as technical, academic, and business experts. So there are rays of hope.

Digging deeper, the sharp fall in U.S. trust levels follows two years when levels were exceptionally high. The current U.S. trust level is roughly the same as in the four reports before that. Maybe you shouldn't head for that doomsday cabin quite yet. Still, other reports also show tremendous doubts about basic questions of truth. A Brand Intelligence study comparing brand attitudes of Democrats vs. Republicans found that eight of the top 10 most polarizing brands were news outlets. World-wide, 59% of people told Edelman they were simply not sure what is true, and just 36% felt the media were doing a good job of guarding information quality.

The social implications of all this are sadly obvious. But this blog is about marketing. How can marketers adapt and thrive in a trust-challenged, politically-polarized world?

- Protect privacy. Consumers can be remarkably cavalier in practice about protecting their data: this McAfee report found 41% don’t immediately change default passwords on new devices and 34% don’t limit access to their home network at all. But they are adamant that companies they do business with be more careful: Accenture studies have found that 92% of U.S. consumers feel it’s extremely important for companies to protect their personal information, while 80% won’t do business with companies they don’t trust. Similarly, a Pega survey found that 45% of EU respondents said they would require companies to erase their data if they found it had been sold or shared with other companies.

- Personalize wisely. Accenture also found that 44% of consumers are frustrated when companies don’t deliver relevant, personalized shopping experiences and 41% had switched companies due to lack of personalization or trust. So there’s clearly a price to be paid for not using the data you do collect. Similarly, an Oracle report found 50% of consumers would be attracted to offers based on personal data, while just 29% would find them creepy. In fact, consumers have a remarkably pragmatic attitude toward their data: a 24[7] survey found their number one reason for sharing personal information is to receive discounts. This has two implications: ask consumers whether they want personalized messages (or any messages), and be sure the value of your personalization outweighs its inherent creepiness.

- Use trusted media. Consumers’ attitudes towards media in general, and social media in particular, are complicated. We’ve already noted that Edelman found growing distrust in online platforms. Other studies by Kantar and Sharethrough found the same. But consumers still spend most of their time on search engines and social media, which GlobalWebIndex found remain by far the top research channels. Yet when it comes to building awareness, a different GlobalWebIndex report found that social ranked far behind search engines, TV, and display ads. Further muddying the waters, social and ecommerce companies (Facebook, Amazon, and eBay) topped NetBase’s list of most loved brands while Google r[...]



Simple Questions to Screen Customer Data Platform Vendors

2018-01-24T16:10:09.996-05:00

I’ve been working for months to find a way to help marketers understand the differences between Customer Data Platform vendors. After several trial balloons, and with considerable help from industry friends, I recently published a set of criteria that I think will do the job. You can see the full explanation on the CDP Institute blog. But since this blog has its own readership, I figured I’d post the basics here as well.

The primary goal is to give marketers a relatively easy way to decide which CDPs are likely to meet their needs. To do this I’ve come up with a small list of features that relate directly to working with particular data sources and supporting particular applications. The theory is that marketers know what sources and applications they need to support, even if they're not experts in the fine points of CDP technology. In other words, read these items as meaning: if you want your CDP to support [this data type or application], then it should have [this feature].

Obviously this list covers just a tiny fraction of all possible CDP features. It’s up to marketers to dig into the details of each system to determine how well it supports their specific needs. We have detailed lists of CDP features in the Evaluation section of the CDP Institute Library.

The final list also includes a few features that are present in all CDPs (or, more precisely, in all systems that I consider a CDP – we can’t control what vendors say about themselves). These are presented since there’s still some confusion about how CDPs differ from other types of systems. Now that the list is set, the next step is to research which features are actually present in which vendors and publish the results. That will take a while, but when it’s done I’ll certainly announce it here.

Here’s the list:

Shared CDP Features: Every CDP does all of these. Non-CDPs may or may not.

- Retain original detail. The system stores data with all the detail provided when it was loaded. This means all details associated with purchase transactions, promotion history, Web browsing logs, changes to personal data, etc. Inputs might be physically reformatted when they’re loaded into the CDP but can be reconstructed if needed.

- Persistent data. The system retains the input data as long as the customer chooses. (This is implied by the previous item but is listed separately to simplify comparison with non-CDP systems.)

- Individual detail. The system can access all detailed data associated with each person. (This is also implied by the first item but is a critical difference from systems that only store and access segment tags on customer records.)

- Vendor-neutral access. All stored data can be exposed to any external system, not only components of the vendor’s own suite. Exposing particular items might require some set-up, and access is not necessarily a real-time query.

- Manage Personally Identifiable Information (PII). The system manages Personally Identifiable Information such as name, address, email, and phone number. PII is subject to privacy and security regulations that vary based on data type, location, permissions, and other factors.

Differentiating CDP Features: A CDP doesn’t have to do any of these, although many do some and some do many. These are divided into three subclasses: data management, analytics, and customer engagement.

Data Management. These are features that gather, assemble, and expose the CDP data.

Base Features. These apply to all types of data.

- API/query access. External systems can access CDP data via an API or a standard query language such as SQL. It’s just barely acceptable for a CDP not to offer this function and instead provide access through data extracts. But API or query access is much preferred and usually available. API or query access often requires some intermediate configuration, reformatting, or indexing to expose items within the CDP’s primary data store. Those a[...]
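To make the "API/query access" item concrete, here is an entirely hypothetical sketch of what SQL access to a CDP's exposed profile data might look like. SQLite stands in for whatever store a real CDP exposes, and the table, columns, and rows are invented for illustration:

```python
import sqlite3

# SQLite stands in for a CDP's query interface; the schema and
# data below are invented for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE customer_profile (
    customer_id TEXT, email TEXT, lifetime_value REAL)""")
conn.executemany(
    "INSERT INTO customer_profile VALUES (?, ?, ?)",
    [("c1", "ann@example.com", 120.0),
     ("c2", "bob@example.com", 45.5)])

# An external system pulls whatever fields it needs via plain SQL,
# rather than waiting for a batch data extract.
rows = conn.execute(
    "SELECT customer_id, lifetime_value FROM customer_profile "
    "WHERE lifetime_value > 100").fetchall()
print(rows)  # [('c1', 120.0)]
```

The point of the sketch is the contrast drawn in the text: standard-query access lets any external system ask ad hoc questions, where extract-based access limits consumers to whatever files the CDP pushes out.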



The Light Bulbs Have Ears: Why Listening Is Voice-Activated Devices' Most Important Skill

2018-01-15T16:26:13.666-05:00

If one picture’s worth a thousand words, why is everyone rushing to replace graphical interfaces with voice-activated systems?

The question has an answer, which we’ll get to below. But even though the phrasing is a bit silly, it truly is worth asking. Anyone who’s ever tried to give written driving directions and quickly switched to drawing a map knows how hard it is to accurately describe any process in words. That’s why research like this study from Invoca shows consumers only want to engage with chatbots on simple tasks and quickly revert to speaking to a human for anything complicated. And it’s why human customer support agents are increasingly equipped with screen sharing tools that let them see what their customer is seeing instead of just talking about it.

Or to put it another way: imagine a voice-activated car that uses spoken commands to replace the steering wheel, gear shifter, gas and brake pedals. It’s a strong candidate for Worst Idea Ever. Speaking the required movements is much harder than making the movements directly. By contrast, the idea of a self-driving car is hugely appealing. That car would also be voice-activated, in the sense that you get in and tell it where to go. The difference between the two scenarios isn’t vocal instructions or even the ability of the system to engage in a human-like conversation. Some people might like their car to engage in witty banter, but those with human friends would probably rather talk with them by phone or spend their ride quietly. A brisk “yes, ma'am” and confirmation that the car understood the instructions correctly should usually suffice.

What makes the self-driving car appealing isn’t that it can listen or speak, but that it can act autonomously. And what makes that autonomy possible is situational awareness – the car's ability to understand its surrounding environment, including its occupant’s intentions, and to respond appropriately. The same is ultimately true of other voice-activated devices. If Alexa and her cousins could only do exactly what we told them, they’d be useful in limited situations – say, to turn on the kitchen lights when your hands are full of groceries. But their exciting potential is to do much more complicated things on their own, like ordering those groceries in the first place (and, eventually, coordinating with other devices to receive the grocery delivery, put the groceries in the right cabinets, prepare a delicious dinner, and clean the dishes). This autonomy only happens if the devices really understand what we want and how to make it happen. Some of that understanding comes from artificial intelligence, but the real limit is what data the AI has available to process.

So I’d argue that the most important skill of voice-activated devices is really listening. That’s how they collect the data they need to act appropriately. And the larger vision is for all these devices to pool the information they gather, allowing each device to do a better job by itself and in cooperation with the others. Whether you want to live in a world where the walls, cars, refrigerators, thermostats, doorknobs, and light bulbs all have ears is debatable. But that’s where we’re headed, barring some improbable-but-not-impossible Black Swan event that changes everything. (Like, say, a devastating security flaw in nearly every microprocessor on the planet that goes undetected for years…wait, that just happened.)

Still, in the context of this blog, what really matters is how it all affects marketers. From that perspective, voice interfaces are highly problematic because they make advertising much harder: instead of passively lurking in the corners of a computer screen, appearing alongside search results, larded into social media feeds, or popping up unbidden during TV shows, voice ads are either front-and-center or nowhere. Chances are consume[...]



What's Next for Customer Data Platforms? New Report Offers Some Clues.

2018-01-03T08:47:12.461-05:00

The Customer Data Platform Institute released its semi-annual Industry Update today. (Download it here.) It’s the third edition of this report, which means we can now look at trends over time. The two dozen vendors in the original report have grown about 25% when measured by employee counts in LinkedIn, which is certainly healthy although not the sort of hyper growth expected from an early stage industry. On the other hand, the report has added two dozen more vendors, which means the measured industry size has doubled. Total employee counts have doubled too. Since many of the new vendors were outside the U.S., LinkedIn probably misses a good portion of their employees, meaning actual growth was higher still.

The tricky thing about this report is that the added vendors aren’t necessarily new companies. Only half were founded in 2014 or later, which might mean they’ve just launched their products after several years of development. The rest are older. Some of these have always been CDPs but just recently came to our attention. This is especially true of companies from outside the U.S. But most of the older firms started as something else and reinvented themselves as CDPs, either through product enhancements or simply by adopting the CDP label. Ultimately it’s up to the report author (that would be me) to decide which firms qualify for inclusion. I’ve done my best to list only products that actually meet the CDP definition.* But I do give the benefit of the doubt to companies that adopted the label. After all, there’s some value in letting the market itself decide what’s included in the category.

What’s most striking about the newly-listed firms is that they are much more weighted towards customer engagement systems than the original set of vendors. Of the original two dozen vendors, eleven focused primarily on building the CDP database, while another six combined database building with analytics such as attribution or segmentation. Only the remaining seven offered customer engagement functions such as personalization, message selection, or campaign management. That’s 29%.** By contrast, 18 of the 28 added vendors offer customer engagement – that’s 64%. It’s a huge switch.

The added firms aren’t noticeably younger than the original vendors, so this doesn’t mean there’s a new generation of engagement-oriented CDPs crowding out older, data-oriented systems. But it does mean that more engagement-oriented firms are identifying themselves as CDPs and adding CDP features as needed to support their positioning. So I think we can legitimately view this as validation that CDPs offer something that marketers recognize they need.

What we don’t know is whether engagement-oriented CDPs will ultimately come to dominate the industry. Certainly they occupy a growing share. But the data- and analysis-oriented firms still account for more than half of the listed vendors (52%) and even higher proportions of employees (57%), new funding (61%), and total funding (74%). So it’s far from clear that the majority of marketers will pick a CDP that includes engagement functions.

So far, my general observation has been that engagement-oriented CDPs appeal more to mid-size firms, while data- and analysis-oriented CDPs appeal most to large enterprises. I think the reason is that large enterprises already have good engagement systems or prefer to buy such systems separately. Smaller firms are more likely to want to replace their engagement systems at the same time they add a CDP, and want to tie the CDP directly to profit-generating engagement functions. Smaller firms are also more sensitive to integration costs, although those should be fairly small where CDPs are concerned. There’s nothing in the report to support or refute this view, since it doesn’t tell us anything about the numbers or sizes[...]



Surprising Results in Customer Data Platform Survey

2017-12-19T21:52:09.594-05:00

The Customer Data Platform Institute (CDPI) recently surveyed its members about their customer-facing systems and CDP deployment plans. (Click here to download the full report.) While CDP Institute members are obviously not typical marketers (being smarter, richer, and better looking), the answers still provide some intriguing insights into the marketplace.

Let’s start with the general state of customer-facing systems. One-third reported they had many disconnected systems; just over one-third (37%) reported they had many systems connected to a central platform of some sort (9% a unified customer database, 9% a unified database and orchestration system, and 19% a marketing automation or CRM platform); 6% said one system does almost everything; and the remaining 23% said they had some other configuration or didn’t know. I’ve compared these results below with several other surveys that asked similar questions. The one thing that immediately jumps out in this comparison is that the CDPI member survey showed a much lower percentage of replies for “one primary system”. Otherwise, the answers across all surveys are very roughly similar, showing about 30% to 50% of companies having many connected systems and many disconnected systems. This suggests more integration than I’d expect, but it depends on how much integration those “many connected” systems really represent.

The CDPI member survey also asked about plans regarding Customer Data Platforms. Nineteen percent had a CDP already deployed, 18% had deployment in progress, and 26% planned to start deployment within the next 12 months. The remaining 38% either planned to start after 12 months (4%), had no plans to deploy (19%), or didn’t know (14%). I haven’t seen any other survey that asks this question, but I have no doubt that CDP deployment and plans are much higher for CDPI members than the industry average.

Where things get really interesting is when we explore how the same people answered these two questions. At first blush, you’d assume the 19% with a deployed CDP would be the same 18% who said they had many systems connected to a unified customer platform, either by itself or with an orchestration engine attached. Not so much. Here’s the actual cross tab of the results.

What you see (in the yellow cells) is that just 42% of the people who said they had a deployed CDP also said they had many systems connected to a unified database. If we allow that a deployed CDP could be present in companies where one customer-facing system does almost everything or the customer-facing systems are connected to marketing automation or CRM, then 74% of the deployed CDPs are covered. I take the remaining 26% as a healthy reminder that just having a CDP doesn’t guarantee all your systems will connect to that CDP, either by feeding data into it or reading data from it. In fact, we know that many CDPs support analytics without being connected to delivery systems, so this really shouldn’t come as a surprise.

On a more encouraging note, as the green cells highlight, a good majority of in-process and planned CDP deployments are at companies with many disconnected systems or many systems connected to a marketing automation or CRM. Those are the companies most in need of data unification. So it does appear that the CDP message is reaching its target audience and CDPs are being used as intended.

The survey also asked about company revenue, business type (B2B vs B2C), and region. Comparing those with current systems and CDP deployment also gave some interesting and unexpected results. But there's no point in repeating them here since you can download the full report and see for yourself. Enjoy![...]
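The cross tab behind this analysis is a simple computation: count how often each pair of answers to the two questions occurs together. Here is a toy sketch with invented responses (not the survey's actual data):

```python
from collections import Counter

# Invented toy responses as (systems_answer, cdp_answer) pairs;
# these are NOT the survey's actual data.
responses = [
    ("connected to unified database", "deployed"),
    ("connected to unified database", "deployed"),
    ("many disconnected systems", "in progress"),
    ("many disconnected systems", "planned"),
    ("one primary system", "deployed"),
]

# Count each (row, column) combination to build the cross tab.
crosstab = Counter(responses)

for (systems, cdp), count in sorted(crosstab.items()):
    print(f"{systems:32} | {cdp:12} | {count}")
```

Reading down a column of such a table is exactly the exercise in the text: of the "deployed" respondents, how many fall in each customer-systems category.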



Here's a Game to Illustrate Strategic Planning

2017-12-06T23:14:54.242-05:00

My wife is working on a Ph.D. in education and recently took a course on strategic planning for academic institutions. Her final project included creating a game to help illustrate the course lessons. What she came up with struck me as applicable to planning in all industries, so I thought I’d share it here.

The fundamental challenge she faced in designing the game was to communicate key concepts about strategic planning. The main message was that strategic planning is about choosing among different strategies to find the one that best matches available resources. That’s pretty abstract, so she made it concrete by presenting game players with a collection of alternative strategies, each on a card of its own. She then created a second set of cards that listed actions available to the players. Each action card showed which strategies the action supported and what resources it required. There were four resources: money, faculty, students, and administrative staff. To keep things simple, she assumed that total resources were fixed, that each strategy contributed equally to the ultimate goal, and that each action contributed equally to whichever strategies it supported. In other words, the components of the game were:

- One goal. In the case of my wife’s game, the goal was to achieve a “top ten” ranking for a particular department within a university. (It was a good goal because it was easily understood and measured.)
- Four strategies. In my wife’s game, the options were to build up the department, cooperate with other departments at the university, cooperate with other universities, or promote the department to the media and general public.
- A dozen actions. Each action supported at least one strategy (scored with 1 or 0) and consumed some quantity of the four resources (scored from 0 to 3 for each resource). Actions were things like “run a conference”, “set up a cross-disciplinary course” and “finance faculty research”.
- Four resources, each assigned an available quantity (i.e., budget).

As you can tell from the description, the action cards are the central feature of the game. Here's a concrete example, where each row represents one action card:

The fundamental game mechanism was to pick a set of actions. These were scored by counting how many supported each strategy and how many resources they consumed. The resource totals couldn't exceed the available quantities for each resource. The table below shows scoring for a set of three actions. In this particular example, all three actions support "cooperate with other departments", while two support "build department" and one each supports "cooperate with other universities" and "promote to public". Resource needs were money = 8, faculty = 6, students = 5 and administration = 1. Someone with these cards could choose "cooperate with other departments" as the best strategy -- if the resources permitted. But if they were limited to 7 points for each resource, they might swap the "fund scholarship" card for the "extracurricular enrichment" card, which uses less money even though it consumes more of the other resources. That works because, with a budget of 7 for each resource, the player can afford to increase spending in the other categories.

As this example suggests, the goal of the game is to get players to think about the relations among strategies, actions and resources, and in particular how to choose actions that fit with strategies and resources.

Although the basic scoring approach is built into the game, there are many ways my wife could have played it:

- Predefine available resources and let different players draw different action cards. They would then decide which strategy best fit the available cards and resources.
- Give different strategy cards t[...]
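The scoring mechanics above are simple enough to sketch in a few lines of code. This is a hypothetical rendering: the card contents and budgets below are invented for illustration, not the actual cards from the game.

```python
from typing import Dict, List, Optional, Tuple

# Invented action cards: each names the strategies it supports (the 1/0
# scores) and the resources it consumes (the 0-3 scores).
ACTIONS = {
    "run a conference": {
        "supports": ["cooperate_departments", "promote_public"],
        "costs": {"money": 3, "faculty": 2, "students": 1, "admin": 1}},
    "cross-disciplinary course": {
        "supports": ["cooperate_departments", "build_department"],
        "costs": {"money": 1, "faculty": 3, "students": 2, "admin": 0}},
    "finance faculty research": {
        "supports": ["build_department"],
        "costs": {"money": 3, "faculty": 1, "students": 0, "admin": 0}},
}

def score_hand(picked: List[str],
               budget: Dict[str, int]) -> Optional[Tuple[Dict[str, int], Dict[str, int]]]:
    """Count strategy support and resource use for a set of action cards.
    Returns None if any resource total exceeds its budget (infeasible hand)."""
    support: Dict[str, int] = {}
    totals = {resource: 0 for resource in budget}
    for name in picked:
        card = ACTIONS[name]
        for strategy in card["supports"]:
            support[strategy] = support.get(strategy, 0) + 1
        for resource, cost in card["costs"].items():
            totals[resource] += cost
    if any(totals[r] > budget[r] for r in budget):
        return None
    return support, totals

budget = {"money": 7, "faculty": 7, "students": 7, "admin": 7}
print(score_hand(["run a conference", "cross-disciplinary course"], budget))
```

A player (or a brute-force solver looping over card subsets) can then compare feasible hands and pick the strategy with the highest support count, which is exactly the trade-off the game is designed to teach.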



2017 Retrospective: Things I Didn't Predict

2017-12-02T10:32:32.131-05:00

It’s the time of year when people make predictions. It’s not my favorite exercise: the best prediction is always that things will continue as they are, but what’s really interesting is change – and significant change is inherently unpredictable. (See Nassim Nicholas Taleb's The Black Swan and Philip Tetlock's Superforecasting on those topics.)

So I think instead I’ll take a look at surprising changes that already happened. I’ve covered many of these in the daily newsletter of the Customer Data Platform Institute (click here to subscribe for free). In no particular order, things I didn’t quite expect this year include:

- Pushback against the walled garden vendors (Facebook, Google, Amazon, Apple, etc.). Those firms continue to dominate life online, and advertising and ecommerce in particular. (Did you know that Amazon accounted for more than half of all Black Friday sales last week?) But the usual whining about their power from competitors and ad buyers has recently been joined by increasing concerns among the public, media, and government. What’s most surprising is it took so long for the government to recognize the power those companies have accrued and the very real threat they pose to governmental authority. (See Martin Gurri’s The Revolt of the Public for by far the best explanation I’ve seen of how the Internet affects politics.) On the other hand, the concentrated power of the Web giants means they could easily be converted into agents of control if the government took over. Don’t think this hasn’t occurred to certain (perhaps most) people in Washington. Perhaps that’s why they’re not interested in breaking them up. Consistent with this thought: the FCC plan to end Net Neutrality will give much more power to cable companies, which as highly regulated utilities have a long history of working closely with government authorities. It’s pitifully easy to imagine the cable companies as enthusiastic censors of unapproved messages.

- Growth in alternative personal data sources. Daily press announcements include a constant stream of news from companies that have found some new way to accumulate data about where people are going, who they meet, what they’re buying, what they plan to buy, what content they’re consuming, and pretty much everything else. Location data is especially common, derived from mobile apps that most people surely don’t realize are tracking them. But I’ve seen other creative approaches such as scanning purchase receipts (in return for a small financial reward, of course) and even using satellite photos to track store foot traffic. In-store technology such as beacons and wifi tracks behaviors even more precisely, and I’ve seen some fascinating (and frightening) claims about visual technologies that capture people’s emotions as well as identities. Combine those technologies with ubiquitous high-resolution cameras, both mounted on walls and built into mobile devices, and the potential to know exactly who does and thinks what is all too real. Cross-device matching and cross-channel identity matching (a.k.a. “onboarding”) are part of this too.

- Growth in voice interfaces. Voice interfaces don't have the grand social implications of the preceding items, but it’s still worth noting that voice-activated devices (Amazon Alexa and friends) and interfaces (Siri, Cortana, etc.) have grown more quickly than I anticipated. The change does add new challenges for marketers who were already having a hard time figuring out where to put ads on a mobile phone screen. With voice, they have no screen at all. Having your phone read ads to you, or perhaps worse sing a catchy jingle, will be pretty annoying. To take a more positive view: voice interfaces will force innovation in how m[...]



Do Customer Data Platforms Need Identity Matching? The Answer May Surprise You.

2017-11-22T14:38:48.338-05:00

I spend a lot of time with vendors trying to decide whether they are, or should be, a Customer Data Platform. I also spend a lot of time with marketers trying to decide which CDPs might be right for them. One topic that’s common in both discussions is whether a CDP needs to include identity resolution – that is, the ability to decide which identifiers (name/address, phone number, email, cookie ID, etc.) belong to the same person.

It seems like an odd question. After all, the core purpose of a CDP is to build a unified customer database, which requires connecting those identifiers so data about each customer can be brought together. So surely identity resolution is required.

Turns out, not so much. There are actually several reasons.

- Some marketers don’t need it. Companies that deal only in a single channel often have just one identifier per customer. For example, Web-only companies might use just a cookie ID. True, channel-specific identifiers sometimes change (e.g., cookies get deleted). But there may be no practical way to link old and new identifiers when that happens, or marketers may simply not care. A more common situation is companies that have already built an identity resolution process, often because they’re dealing with customers who identify themselves by logging in or who transact through accounts. Financial institutions, for example, often know exactly who they’re dealing with because all transactions are associated with an account that’s linked to a customer's master record (or perhaps not linked because the customer prefers it that way). Even when identity resolution is complicated, mature companies often (well, sometimes) have mature processes to apply a customer ID to all data before it reaches the CDP. In any of these cases, the CDP can use the ID it’s given and doesn't need an identity resolution process of its own.

- Some marketers can only use it if it’s perfect. Again, think of a financial institution: it can’t afford to guess who’s trying to take money out of an account, so it requires the customer to identify herself before making a transaction. In many other circumstances, absolute certainty isn’t required, but a false association could be embarrassing or annoying enough that the company isn’t willing to risk it. In those cases, all that’s needed is an ability to “stitch” together identifiers based on definite connections. That might mean two devices are linked because they both sent emails using the same email address, or an email and phone number are linked because someone entered them both into a registration form. Almost every CDP has this sort of “deterministic” linking capability, which is so straightforward that it barely counts as identity resolution in the broader sense.

- Specialized software already exists. The main type of matching that CDPs do internally – beyond simple stitching – is “fuzzy” matching. This applies rules to decide when two similar-looking records really refer to the same person. It's most commonly applied to names and postal addresses, which are often captured inconsistently from one source to the next. It might sometimes be applied to other types of data, such as different forms of an email address (e.g. draab@raabassociates.com and draab@raabassociatesinc.com). The technology for this sort of matching gets very complicated very quickly, and it’s something that specialized vendors offer either for purchase or as a service. So CDP vendors can quite reasonably argue they needn’t build this for themselves but should simply integrate an external product.

- Much identity resolution requires external data. This is the heart of the matter. Most of the really interesting identity resolution today involves linkin[...]
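The deterministic “stitching” described above is essentially a graph-connectivity problem: any two identifiers seen together in one event get linked, and transitive links merge into a single customer. Here is a minimal sketch using a union-find structure; the identifier formats and sample events are invented for illustration:

```python
from collections import defaultdict

class IdentityGraph:
    """Deterministic identity stitching via union-find."""

    def __init__(self):
        self.parent = {}

    def _find(self, x):
        # Walk to the root of x's cluster, compressing the path as we go.
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def link(self, a, b):
        """Record a definite connection between two identifiers."""
        root_a, root_b = self._find(a), self._find(b)
        if root_a != root_b:
            self.parent[root_b] = root_a

    def clusters(self):
        """Return the sets of identifiers believed to be the same person."""
        groups = defaultdict(set)
        for identifier in list(self.parent):
            groups[self._find(identifier)].add(identifier)
        return list(groups.values())

g = IdentityGraph()
g.link("email:jane@example.com", "cookie:abc123")   # same login session
g.link("email:jane@example.com", "phone:555-0100")  # same registration form
g.link("cookie:zzz999", "email:sam@example.com")    # a different person
print(sorted(len(c) for c in g.clusters()))         # cluster sizes
```

Note that this only merges on definite evidence; the “fuzzy” matching discussed next is a separate, far harder layer on top.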



No, Users Shouldn't Write Their Own Software

2017-11-10T11:58:03.654-05:00

Salesforce this week announced “myEinstein” self-service artificial intelligence features to let non-technical users build predictive models and chatbots. My immediate reaction was that it's a bad idea: top-of-the-head objections include duplicated effort, wasted time, and the potential for really bad results. I'm sure I could find other concerns if I thought about it, but today’s world brings a constant stream of new things to worry about, so I didn’t bother. But then today’s news described an “Everyone Can Code” initiative from Apple, which raised essentially the same issue in even clearer terms: should people create their own software?

I thought this idea had died a well-deserved death decades ago. There was a brief period when people thought that “computer literacy” would join reading, writing, and arithmetic as basic skills required for modern life. But soon they realized that you can run a computer using software someone else wrote!* That made the idea of everyone writing their own programs seem obviously foolish – specifically because of duplicated effort, wasted time, and the potential for really bad results. It took IT departments much longer to come around to the notion of buying packaged software instead of writing their own, but even that battle has now mostly been won. Today, smart IT groups only create systems to do things that are unique to their business and provide significant competitive advantage.

But the idea of non-technical workers creating their own systems isn't just about packaged vs. self-written software. It generally arises from a perception that corporate systems don’t meet workers’ needs: either because the corporate systems are inadequate or because corporate IT is hard to work with and has other priorities. Faced with such obstacles to getting their jobs done, the more motivated and technically adept users will create their own systems, often working with tools like spreadsheets that aren’t really appropriate but have the unbeatable advantage of being available. Such user-built systems frequently grow to support work groups or even departments, especially at smaller companies. They’re much disliked by corporate IT, sometimes for turf protection but mostly because they pose very real dangers to security, compliance, reliability, and business continuity. Personal development on a platform like myEinstein poses many of the same risks, although the data within Salesforce is probably more secure than data held on someone’s personal computer or mobile phone.

Oddly enough, marketing departments have been a little less prone to this sort of guerrilla IT development than some other groups. The main reason is probably that modern marketing revolves around customer data and customer-facing systems, which are still managed by a corporate resource (not necessarily IT: it could be Web development, marketing ops, or an outside vendor). In addition, the easy availability of Software as a Service packages has meant that even rogue marketers are using software built by professionals. (Although once you get beyond customer data to things like planning and budgeting, it’s spreadsheets all the way.)

This is what makes the notion of systems like myEinstein so dangerous (and I don’t mean to pick on Salesforce in particular; I’m sure other vendors have similar ideas in development). Because those systems are directly tied into corporate databases, they remove the firewall that (mostly) separated customer data and processes from end-user developers. This opens up all sorts of opportunities for well-intentioned workers to cause damage.

But let’s assume there are enough guardrails in place to avoid the obvious security and customer treatment risks. Personal systems have a more fund[...]



TrenDemon and Adinton Offer Attribution Options

2017-11-06T13:41:28.726-05:00

I wrote a couple of weeks ago about the importance of attribution as a guide for artificial intelligence-driven marketing. One implication was that I should pay more attention to attribution systems. Here’s a quick look at two products that tackle different parts of the attribution problem: content measurement and advertising measurement.

TrenDemon

Let’s start with TrenDemon. Its specialty is measuring the impact of marketing content on long B2B sales cycles. It does this by placing a tag on client Web sites to identify visitors and track the content they consume, and then connecting client CRM systems to find which visitor companies ultimately made a purchase (or reached some other user-specified goal). Visitors are identified by company using their IP address and as individuals by tracking cookies.

TrenDemon does a bit more than correlate content consumption and final outcomes. It also identifies when each piece of content is consumed, distinguishing between the start, middle, and end of the buying journey. It also looks at other content metrics such as how many people read an item, how much time they spend with it, and how many read something else after they’re done. These and other inputs are combined to generate an attribution score for each item. The system uses the score to identify the most effective items for each journey stage and to recommend which items should be presented in the future. Pricing for TrenDemon starts at $800 per month. The system was launched in early 2015 and is currently used by just over 100 companies.

Adinton

Next we have Adinton, a Barcelona-based firm that specializes in attribution for paid search and social ads. Adinton has more than 55 clients throughout Europe, mostly selling travel and insurance online. Such purchases often involve multiple Web site visits but still have a shorter buying cycle than complex B2B transactions. Adinton has pixels to capture Web ad impressions as well as Web site visits. Like TrenDemon, it tracks site visitors over time and distinguishes between starting, middle, and finishing clicks. It also distinguishes between attributed and assisted conversions. When possible, it builds a unified picture of each visitor across devices and channels.

The system uses this data to calculate the cost of different click types, which it combines to create a “true” cost per action for each ad purchase. It compares this with the client's target cost per action to determine where they are over- or under-investing. Adinton has API connections to gather data from Google AdWords, Facebook Ads, Bing Ads, AdRoll, RocketFuel, and other advertising channels. An autobidding system can currently adjust bids in AdWords and will add Facebook and Bing adjustments in the near future. The system also does keyword research and click fraud identification. Pricing is based on the number of clicks and starts as low as $299 per month for attribution analysis, with additional fees for the autobidding and click fraud modules. Adinton was founded in 2013. It launched its first product in 2014, although attribution came later.

Further Thoughts

These two products were chosen almost at random, so I wouldn’t assign any global significance to their features. But it’s still intriguing that both add a first/middle/last buying stage to the analysis. It’s also interesting that they occupy a middle ground between totally arbitrary attribution methodologies, such as first touch/last touch/fractional credit, and advanced algorithmic methods that attempt to calculate the true incremental impact of each touch. (Note that neither TrenDemon’s nor Adinton’s summary metric is presented as estimating incremental value.) Of course, without true incrementa[...]



Flytxt Offers Broad and Deep Customer Management

2017-10-29T19:43:23.601-04:00

Some of the most impressive marketing systems I’ve seen have been developed for mobile phone marketing, especially for companies that sell prepaid phones. I don’t know why: probably some combination of intense competition, easy switching when customers have no subscription, location as a clear indicator of varying needs, immediately measurable financial impact, and lack of legacy constraints in a new industry. Many of these systems have developed outside the United States, since prepaid phones have a smaller market share here than elsewhere.

Flytxt is a good example. Founded in India in 2008, its original clients were South Asian and African companies whose primary product was text messaging. The company has since expanded in all directions: it has clients in 50+ countries including South America and Europe plus a beachhead in the U.S.; its phone clients sell many more products than text; it has a smattering of clients in financial services and manufacturing; and it has corporate offices in Dubai and headquarters in the Netherlands.

The product itself is equally sprawling. Its architecture spans what I usually call the data, decision, and delivery layers, although Flytxt uses different language. The foundation (data) layer includes data ingestion from batch and real-time sources with support for structured, semi-structured and unstructured data, data preparation including deterministic identity stitching, and a Hadoop-based data store. The intelligence (decision) layer provides rules, recommendations, visualization, packaged and custom analytics, and reporting. The application (delivery) layer supports inbound and outbound campaigns, a mobile app, and an ad server for clients who want to sell ads on their own Web sites. To be a little more precise, Flytxt’s application layer uses API connectors to send messages to actual delivery systems such as Web sites and email engines. Most enterprises prefer this approach because they have sophisticated delivery systems in place and use them for other purposes beyond marketing messaging.

And while we’re being precise: Flytxt isn’t a Customer Data Platform because it doesn’t give external systems direct access to its unified customer data store. But it does provide APIs to extract reports and selected data elements and can build custom connectors as needed. So it could probably pass as a CDP for most purposes.

Given the breadth of Flytxt’s features, you might expect the individual features to be relatively shallow. Not so. The system has advanced capabilities throughout. Examples include anonymizing personally identifiable information before sharing customer data; multiple language versions attached to a single offer; rewards linked to offers; contact frequency limits by channel across all campaigns; rule- and machine learning-based recommendations; six standard predictive models plus tools to create custom models; automated control groups in outbound campaigns; real-time event-based program triggers; and a mobile app with customer support, account management, chat, personalization, and transaction capabilities. The roadmap is also impressive, including automated segment discovery and autonomous agents to find next best actions.

What particularly caught my eye was Flytxt’s ability to integrate context with offer selection. Real-time programs are connected to touchpoints such as a Web site. When a customer appears, Flytxt identifies the customer, looks up her history and segment data, infers intent from the current behavior and context (such as location), and returns the appropriate offer for the current situation. The offer and message can be further personalized based o[...]



When to Use a Proof of Concept in Marketing Software Selection -- And When Not

2017-10-22T16:55:51.230-04:00

“I used to hate POCs (Proof of Concepts) but now I love them,” a Customer Data Platform vendor told me recently. “We do POCs all the time,” another said when I raised the possibility on behalf of a client.

Two comments could be a coincidence. (Three make a Trend.) But, as the first vendor indicated, POCs have traditionally been something vendors really disliked. So even the possibility that they’ve become more tolerable is worth exploring.

We should start by defining the term. A Proof of Concept is a demonstration that something is possible. In technology in general, the POC is usually an experimental system that performs a critical function that had not previously been achieved. A similar definition applies to software development. In the context of marketing systems, though, a POC is usually not so much an experiment as a partial implementation of an existing product. What's being proven is the system's ability to execute key functions on the buyer's own data and/or systems. The distinction is subtle but important because it puts the focus on meeting the client's needs.

Of course, software buyers have always watched system demonstrations. Savvy buyers have insisted that demonstrations execute scenarios based on their own business processes. A carefully crafted set of scenarios can give a clear picture of how well a system does what the client wants. Scenarios are especially instructive if the user can operate the system herself instead of just watching a salesperson. What scenarios don’t illustrate is loading a buyer’s data into the system or the preparation needed to make that data usable. That’s where the POC comes in.

The cost of loading client data was the reason most vendors disliked POCs. Back in the day, it required detailed analysis of the source data and hand-tuning of the transformation processes to put the data into the vendor’s database. Today this is much easier because source systems are usually more accessible and marketing systems – at least if they’re Customer Data Platforms – have features that make transformation and mapping much more efficient.

The ultimate example of easier data loads is the one-click connection between many marketing automation and CRM “platforms” and applications that are pre-integrated with those platforms. The simplicity is possible because the platforms and the apps are cloud-based, Software as a Service products. This means there are no custom implementations or client-run systems to connect. Effortless connections let many vendors offer free trials, since little or no vendor labor is involved in loading a client’s data. In fact, free trials are problematic precisely because so little work goes into setting them up. Some buyers are diligent about testing their free trial system and get real value from the experience. But many set up a free trial and then don't use it, or use it briefly without putting in the effort to learn how the system works. This means that all but the simplest products don’t get a meaningful test, and users often underestimate the value of a system because they haven’t learned what it can do.

POCs are not quite the same as free trials because they require more effort from the vendor to set up. In return, most vendors will require a corresponding effort from the buyer to test the POC system. On balance that’s a good thing, since it ensures that both parties will learn from the project.

Should a POC be part of every vendor selection process? Not at all. POCs answer some important questions, including how easily the vendor c[...]



Wizaly Offers a New Option for Algorithmic Attribution

2017-10-16T15:43:59.616-04:00

Wizaly is a relatively new entrant in the field of algorithmic revenue attribution – a function that will be essential for guiding artificial-intelligence-driven marketing of the future. Let’s take a look at what they do.

First a bit of background: Wizaly is a spin-off of Paris-based performance marketing agency ESV Digital (formerly eSearchVision). The agency’s performance-based perspective meant it needed to optimize spend across the entire customer journey, not simply use first- or last-click attribution approaches which ignore intermediate steps on the path to purchase. Wizaly grew out of this need.

Wizaly’s basic approach to attribution is to assemble a history of all messages seen by each customer, classify customers based on the channels they saw, compare results of customers whose experience differs by just one channel, and attribute any difference in results to that channel. For example, one group of customers might have seen messages in paid search, organic search, and social; another might have seen messages in those channels plus display retargeting. Any difference in performance would be attributed to display retargeting.

This is a simplified description; Wizaly is also aware of other attributes such as the profiles of different customers, traffic sources, Web site engagement, location, browser type, etc. It apparently factors some or all of these into its analysis to ensure it is comparing performance of otherwise-similar customers. It definitely lets users analyze results based on these variables so they can form their own judgments.
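The core one-channel-difference comparison can be illustrated with a toy calculation. This sketch assumes invented channel sets and conversion rates, and deliberately ignores the customer-profile controls Wizaly layers on top:

```python
from itertools import permutations

# Toy journey data: (set of channels a customer group saw, conversion rate).
# Channels and rates are invented for illustration, not real measurements.
journeys = [
    ({"search"}, 0.02),
    ({"search", "social"}, 0.04),
    ({"search", "social", "retargeting"}, 0.07),
]

def incremental_lift(journeys):
    """Compare groups whose channel sets differ by exactly one channel and
    attribute the difference in conversion rate to that extra channel."""
    lifts = {}
    for (a_channels, a_rate), (b_channels, b_rate) in permutations(journeys, 2):
        extra = b_channels - a_channels
        # b saw exactly one channel more than a, and otherwise the same ones
        if a_channels < b_channels and len(extra) == 1:
            lifts[extra.pop()] = b_rate - a_rate
    return lifts

lifts = incremental_lift(journeys)
print({channel: round(value, 3) for channel, value in lifts.items()})
```

In this toy example the extra retargeting exposure gets credit for the 3-point difference in conversion rate, and social for 2 points; a real system would average such comparisons over many matched groups.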

Wizaly gets its data primarily from pixels it places on ads and Web pages. These drop cookies to track customers over time and can track ads that are seen, even if they’re not clicked, as well as detailed Web site behaviors. The system can incorporate television through an integration with Realytics, which correlates Web traffic with when TV ads are shown. It can import ad costs and ingest offline purchases to use in measuring results. The system can stitch together customer identities using known identifiers. It can also do some probabilistic matching based on behaviors and connection data and will supplement this with data from third-party cross device matching specialists.

Reports include detailed traffic analysis, based on the various attributes the system collects; estimates of the importance and effectiveness of each channel; and recommended media allocations to maximize the value from ad spending. The system doesn't analyze the impact of message or channel sequence, compare the effectiveness of different messages, or estimate the impact of messages on long-term customer outcomes. It also has a partial blind spot for mobile – a major concern, given how important mobile has become – and other gaps for offline channels and results. These are problems for most algorithmic attribution products, not just Wizaly.

One definite advantage of Wizaly is price: at $5,000 to $15,000 per month, it is generally cheaper than better-known competitors. Pricing is based on traffic monitored and data stored. The company was spun off from ESV Digital in 2016 and currently has close to 50 clients worldwide.



Attribution Will Be Critical for AI-Based Marketing Success

2017-10-08T07:31:32.279-04:00

I gave my presentation on Self-Driving Marketing Campaigns at the MarTech conference last week. Most of the content followed the arguments I made here a couple of weeks ago, about the challenges of coordinating multiple specialist AI systems. But prepping for the conference led me to refine my thoughts, so there are a couple of points I think are worth revisiting.

The first is the distinction between replacing human specialists with AI specialists, and replacing human managers with AI managers. Visually, the first progression looks like this as AI gradually takes over specialized tasks in the marketing department:

The insight here is that while each machine presumably does its job much better than the human it replaces,* the output of the team as a whole can’t fundamentally change because of the bottleneck created by the human manager overseeing the process. That is, work is still organized into campaigns that deal with customer segments because the human manager needs to think in those terms. It’s true that the segments will keep getting smaller, the content within each segment more personalized, and more tests will yield faster learning. But the human manager can only make a relatively small number of decisions about what the robots should do, and that puts severe limits on how complicated the marketing process can become.

The really big change happens when that human manager herself is replaced by a robot:

Now, the manager can also deal with more-or-less infinite complexity. This means we no longer need campaigns and segments and can truly orchestrate treatments for each customer as an individual. In theory, the robot manager could order her robot assistants to create custom messages and offers in each situation, based on the current context and past behaviors of the individual human involved. In essence, each customer has a personal robot following her around, figuring out what’s best for her alone, and then calling on the other robots to make it happen.
Whether that's a paradise or nightmare is beyond the scope of this discussion.

In my post a few weeks ago, I was very skeptical that manager robots would be able to coordinate the specialist systems any time soon. That now strikes me as less of a barrier. Among other reasons, I’ve seen vendors including Jivox and RevJet introduce systems that integrate large portions of the content creation and delivery workflows, potentially or actually coordinating the efforts of multiple AI agents within the process. I also had an interesting chat with the folks at Albert.ai, who have addressed some of the knottier problems about coordinating the entire campaign process. These vendors are still working with campaigns, not individual-level journey orchestration. But they are definitely showing progress.

As I've become less concerned about the challenges of robot communication, I've grown more concerned about robots making the right decisions. In other words, the manager robot needs a way to choose what the specialist robots will work on, so they are doing the most productive tasks. The choices must be based on estimating the value of different options. Creating such estimates is the job of revenue attribution. So it turns out that accurate attribution is a critical requirement for AI-based orchestration.

That’s an important insight. All marketers acknowledge that attribution is important, but most have focused their attention on other tasks in recent years. Even vendors that do attribution often limit themselves to assigning user-selected fractions of value to different channels or touches, repla[...]
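To make the fractional-credit approach concrete: a position-based ("U-shaped") model gives fixed shares to the first and last touches and splits the remainder among the middle ones. This is an illustrative sketch, not any particular vendor's method; the journey and the 40/20/40 split are invented for the example.

```python
# Position-based ("U-shaped") multi-touch attribution: the first and
# last touches get fixed fractions, middle touches split the rest.
# Purely illustrative -- real systems weight by channel and use models.

def position_based_credit(touches, value, first=0.4, last=0.4):
    """Assign fractions of `value` to each touch in a customer journey."""
    n = len(touches)
    if n == 1:
        return {touches[0]: value}
    if n == 2:
        return {touches[0]: value * 0.5, touches[1]: value * 0.5}
    middle = (1.0 - first - last) / (n - 2)
    credit = {}
    for i, touch in enumerate(touches):
        share = first if i == 0 else last if i == n - 1 else middle
        credit[touch] = credit.get(touch, 0.0) + value * share
    return credit

journey = ["display_ad", "email", "search", "web_visit"]
print(position_based_credit(journey, 100.0))
```

An incremental model would instead compare outcomes for customers with and without each touch, which is exactly why it needs the unified cross-channel history a CDP assembles.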



Customer Data Platforms Spread Their Wings

2018-01-13T09:46:44.831-05:00

I escaped from my cave this week to present at two conferences: the first-ever “Customer Data Platform Summit” hosted by AgilOne in Los Angeles, preceding Shop.org, and the Technology for Marketing conference in London, where BlueVenn sponsored me. I listened as much as I could along the way to find what’s new with the vendors and their clients. There were some interesting developments.

Broader awareness of CDP. The AgilOne event was invitation-only while the London presentation was open to any conference attendee, although BlueVenn did personally invite companies it wanted to attend. Both sets of listeners were already aware of CDPs, which isn’t something I’d have expected to see a year or two ago. Both also had a reasonable notion of what a CDP does. But they still seemed to need help distinguishing CDPs from other types of systems, so we still have plenty more work to do in educating the market.

Use of CDPs beyond marketing. People in both cities described CDPs being bought and used throughout client organizations, sometimes after marketing was the original purchaser and sometimes as a corporate project from the start. That was always a potential, but it’s delightful to hear about it actually happening. The more widely a CDP is used in a company, the more value the buyer gets – and the more benefit to the company’s customers. So hooray for that.

CDPs in vertical markets. The AgilOne audience were all retailers, not surprisingly given AgilOne’s focus and the relation of the event to Shop.org. But I heard in London about CDPs in financial services, publishing, telecommunications, and several other industries where CDP hasn’t previously been used much. More evidence of the broader awareness and the widespread need for the solution that CDP provides.

CDP for attribution. While in London I also stopped by the office of Fospha, another CDP vendor, which has just become a Sponsor of the CDP Institute. 
They are unusual in having a focus on multi-touch attribution, something we’ve seen in a couple of other CDPs but definitely less common than campaign management or personalization. That caught my attention because I just finished an analysis of artificial intelligence in journey orchestration, in which one major conclusion was that multi-touch attribution will be a key enabling technology. That needs a blog post of its own to explain, but the basic reason is that AI needs attribution (specifically, estimating the incremental value of each marketing action) as a goal to optimize against when it's comparing investments in different marketing tasks (content, media, segmentation, product, etc.).

If there's a common thread here, it's that CDPs are spreading beyond their initial buyers and applications. I’ll be presenting next week at yet another CDP-focused event, this one sponsored by BlueConic in advance of the Boston Martech Conference. Who knows what new things we'll see there?[...]
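The idea of incremental value as an optimization goal can be illustrated with a toy allocator. Assuming (purely for illustration) diminishing returns and made-up value estimates, an optimizer could spend one budget unit at a time on whichever task currently promises the largest increment:

```python
import heapq

# Greedy budget allocation driven by estimated incremental values.
# Assumes diminishing returns: each extra unit spent on a task is worth
# `decay` times the previous unit. All numbers are invented.

def allocate(budget_units, base_values, decay=0.8):
    """Spend one unit at a time on whichever task has the highest
    estimated incremental value right now."""
    heap = [(-v, task) for task, v in base_values.items()]  # max-heap via negation
    heapq.heapify(heap)
    spend = {task: 0 for task in base_values}
    for _ in range(budget_units):
        neg_v, task = heapq.heappop(heap)
        spend[task] += 1
        heapq.heappush(heap, (neg_v * decay, task))  # next unit is worth less
    return spend

estimates = {"content": 10.0, "media": 8.0, "segmentation": 5.0}
print(allocate(5, estimates))
```

The hard part in practice is not this loop but producing trustworthy incremental estimates, which is what attribution supplies.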



Vizury Combines Web Page Personalization with a Customer Data Platform

2017-09-16T11:13:32.295-04:00

One of the fascinating things about tracking Customer Data Platforms is the great variety among the vendors. It’s true that variety causes confusion for buyers. The CDP Institute is working to ease that pain, most recently with a blog discussion you’re welcome to join here. But for me personally, it’s been endlessly intriguing to trace the paths that vendors have followed to become CDPs and learn where they plan to go next.

Take Vizury, a Bangalore-based company that started eight years ago as a retargeting ad bidding platform. That grew into a successful business with more than 200 employees, 400 clients in 40 countries, and $30 million in funding. As it developed, the company expanded its product and, in 2015, released its current flagship, Vizury Engage, an omnichannel personalization system sold primarily to banks and insurance companies. Engage now has more than a dozen enterprise clients in Asia, expects to double that roster in the next six months, and is testing the waters in the U.S.

As often happens, Vizury’s configuration reflects its origins. In their case, the most obvious impact is on the scope of the system, which includes sophisticated Web page personalization – something very rare in the CDP world at large. In a typical implementation, Vizury builds the client’s Web site home page. That gives it complete control of how each visitor is handled. The system doesn't take over the rest of the client's Web site, although it can inject personalized messages on those pages through embedded tags.

In both situations, Vizury identifies known visitors by reading a hashed (i.e., disguised) customer ID it has placed in the visitor’s browser cookie. When a visitor enters the site, a Vizury tag sends the hashed ID to the Vizury server, which looks up the customer, retrieves a personalized message, and sends it back to the browser. The messages are built from templates which can include variables such as first name and calculated values such as a credit limit.  
Customer-specific versions may be pregenerated to speed response; these are updated as new data is received about each customer. It takes ten to fifteen seconds for new information to make its way through the system and be reflected in output seen by the visitor.

Message templates are embedded in what Vizury calls an engagement, which is associated with a segment definition and can include versions of the same message for different channels. One intriguing strength of Vizury is machine-learning-based propensity models that determine each customer’s preferred channel. This lets Vizury send outbound messages through the customer’s preferred channel when there’s a choice. Outbound options include email, SMS, Facebook ads, and programmatic display ads. These can be sent on a fixed schedule or be triggered when the customer enters or leaves a segment. Bids for Facebook and display ads can be managed by Vizury’s own bidding engine, another vestige of its origins. Inbound options include on-site and browser push messages.

If a Web visitor is eligible for multiple messages, Vizury currently just picks one at random. The vendor is working on an automated optimization system that will pick the best message for each customer instead. There’s no way to embed a sequence of different messages within a given engagement, although segment definitions could push customers from one engagement to the next. Users do have the ability to specify how often a customer will be sent the same message, block messages the customer has already respo[...]
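The hashed-ID lookup loop described above can be sketched roughly as follows. To be clear, this is not Vizury's actual implementation: the SHA-256 hash, profile fields, and template text are all invented for illustration.

```python
import hashlib

# Sketch of a hashed-ID personalization loop: the browser tag sends a
# hashed customer ID from the cookie; the server maps it to a profile
# and fills a message template. All names and fields are invented.

PROFILES = {"cust-123": {"first_name": "Priya", "credit_limit": 5000}}

# Server-side index keyed by the hashed (disguised) ID, so the raw
# customer ID never travels in the cookie.
HASHED_INDEX = {
    hashlib.sha256(cid.encode()).hexdigest(): cid for cid in PROFILES
}

TEMPLATE = "Hi {first_name}, you are pre-approved for up to ${credit_limit}."

def personalize(hashed_id):
    cid = HASHED_INDEX.get(hashed_id)
    if cid is None:
        return "Welcome!"  # anonymous-visitor fallback
    return TEMPLATE.format(**PROFILES[cid])

cookie_value = hashlib.sha256(b"cust-123").hexdigest()
print(personalize(cookie_value))  # Hi Priya, you are pre-approved for up to $5000.
```

Pregenerating the rendered message per customer, as Vizury does, trades storage for the ten-to-fifteen-second update lag the post mentions.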



B2B Marketers Are Buying Customer Data Platforms. Here's Why.

2017-09-10T10:22:13.093-04:00

I’m currently drafting a paper on use of Customer Data Platforms by B2B SaaS marketers. The topic is more intriguing than it sounds because it raises the dual questions of why CDPs haven’t previously been used much by B2B SaaS companies and what's changed. To build some suspense, let’s first review who else has been buying CDPs.

We can skip over the first 3.8 billion years of life on earth, when the answer is no one. When true CDPs first emerged from the primordial ooze, their buyers were concentrated among B2C retailers. That’s not surprising, since retailers have always been among the most data-driven marketers. They’re the R in BRAT (Banks, Retailers, Airlines, Telcos), the mnemonic I’ve long used to describe the core data-driven industries*. What's more surprising is that the B's, A's, and T's weren't also early CDP users. I think the reason is that banks, airlines, and telcos all capture their customers’ names as part of their normal operations. This means they’ve always had customer data available and thus been able to build extensive customer databases without a CDP.

By contrast, offline retailers must work hard to get customer names and tie them to transactions, using indirect tools such as credit cards and loyalty programs. This means their customer data management has been less mature and more fragmented. (Online retailers do capture customer names and transactions operationally. And, while I don’t have firm data, my impression is that online-only retailers have been slower to buy CDPs than their multi-channel cousins. If so, they're the exception that proves the rule.)

Over the past year or two, as CDPs have moved beyond the early adopter stage, more BATs have in fact started to buy CDPs. As a further sign of industry maturity, we’re now starting to see CDPs that specialize in those industries. Emergence of such vertical systems is normal: it happens when demand grows in new segments because the basic concepts of a category are widely understood.  
Specialization gives new entrants a way to sell successfully against established leaders. Sure enough, we're also seeing new CDPs with other types of specialties, such as products from regional markets (France, India, and Australia have each produced several) and for small and mid-size organizations (not happening much so far, but there are hints). And, of course, the CDP industry has always been characterized by an unusually broad range of product configurations, from systems that only build the central database to systems that provide a database, analytics, and message selection; that's another type of specialization. I recently proposed a way to classify CDPs by function on the CDP Institute blog.**

B2B is another vertical. B2B marketers have definitely been slow to pick up on CDPs, which may seem surprising given their frenzied adoption of other martech. I’d again explain this in part by the state of the existing customer data: the more advanced B2B marketers (who are the most likely CDP buyers) nearly all have a marketing automation system in place. The marketers' initial assumption would be that marketing automation can assemble a unified customer database, making them uninterested in exploring a separate CDP. Eventually they'd discover that nearly all B2B marketing automation systems are very limited in their data management capabilities. That’s happening now in many cases – and, sure enough, we’re now seeing more interest among B2B marketers in[...]



AgilOne Adds New Flexibility to An Already-Powerful Customer Data Platform

2017-08-31T13:04:02.366-04:00


It’s more than four years since my original review of AgilOne, a pioneering Customer Data Platform. As you might imagine, the system has evolved quite a bit since then. In fact, the core data management portions have been entirely rebuilt, replacing the original fixed data model with a fully configurable model that lets the system easily adapt to each customer.

The new version uses a bouquet of colorfully-named big data technologies (Kafka, Parquet, Impala, Spark, Elasticsearch, etc.) to support streaming inputs, machine learning, real time queries, ad hoc analytics, SQL access, and other things that don’t come naturally to Hadoop. It also runs on distributed processors that allow fast scaling to meet peak demands. That’s especially important to AgilOne since most of its clients are retailers whose business can spike sharply on days like Black Friday.

In other ways, though, AgilOne is still similar to the system I reviewed in 2013. It still provides sophisticated data quality, postal processing, and name/address matching, which are often missing in CDPs designed primarily for online data. It still has more than 300 predefined attributes for specialized analytics and processing, although the system can function without them. It still includes predictive models and provides a powerful query builder to create audience segments. Campaigns are still designed to deliver one message, such as an email, although users could define campaigns with related audiences to deliver a sequence of messages. There’s still a “Customer360” screen to display detailed information about individual customers, including full interaction history.

But there’s plenty new as well. There are more connectors to data sources, a new interface that lets users add custom fields and calculations for themselves, and workflow diagrams to manage data processing flows. Personalization has been enhanced and the system exposes message-related data elements including product recommendations and the last products browsed, purchased, and abandoned. AgilOne now supports Web, mobile, and social channels and offers more options for email delivery. A/B tests have been added while analytics and reporting have been enhanced.

What should be clear is that AgilOne has an exceptionally broad (and deep) set of features. This puts it at one end of the spectrum of Customer Data Platforms. At the other end are CDPs that build a unified, sharable customer database and do nothing else. In between are CDPs that offer some subset of what AgilOne offers: advanced identity management, offline data support, predictive analytics, segmentation, multi-channel campaigns, real time interactions, advanced analytics, and high scalability. This variety is good for buyers, since it means there’s a better chance they can find a system that matches their needs. But it’s also confusing, especially for buyers who are just learning about CDPs and don’t realize how much they can differ. That confusion is something we’re worrying about a lot at the CDP Institute right now. If you have ideas for how to deal with it, let me know.



Self-Driving Marketing Campaigns: Possible But Not Easy

2017-08-25T20:18:54.133-04:00

A recent Forrester study found that most marketers expect artificial intelligence to take over the more routine parts of their jobs, allowing them to focus on creative and strategic work. That’s been my attitude as well. More precisely, I see AI enabling marketers to provide the highly tailored experiences that customers now demand. Without AI, it would be impossible to make the number of decisions necessary to do this. In short, complexity is the problem, AI is the solution, and we all get Friday afternoons off. Happy ending.

But maybe it's not so simple. Here’s the thing: we all know that AI works because it can learn from data. That lets it make the best choice in each situation, taking into account many more factors than humans can build into conventional decision rules. We also all know that machines can automatically adjust their choices as they learn from new data, allowing them to continuously adapt to new situations.

Anyone who's dug a bit deeper knows two more things:

- Self-adjustment only works in circumstances similar to the initial training conditions. AI systems don’t know what to do when they’re faced with something totally unexpected. Smart developers build their systems to recognize such situations, alert human supervisors, and fail gracefully by taking an action that is likely to be safe. (This isn’t as easy as it sounds: a self-driving car shouldn’t stop in the middle of an intersection when it gets confused.)

- AI systems of today and the near future are specialists. Each is trained to do a specific task like play chess, look for cancer in an X-ray, or bid on display ads. This means that something like a marketing campaign, which involves many specialized tasks, will require cooperation of many AIs. That’s not new: most marketing work today is done by human specialists, who also need to cooperate. 
But while cooperation comes naturally to (most) humans, it needs to be purposely added as a skill to an AI.*

By itself, this more nuanced picture isn’t especially problematic. Yes, marketers will need multiple AIs and those AIs will need to cooperate. Maintaining that cooperation will be work but presumably can itself eventually be managed by yet another specialized AI. But let’s put that picture in a larger context.

The dominant feature of today’s business environment is accelerating change. AI itself is part of that change but there are other forces at play: notably, the “personal network effect” that drives companies like Facebook, Google, and Amazon to hoard increasing amounts of data about individual consumers. These forces will impose radical change on marketers’ relations with customers. And radical change is exactly what the marketers’ AI systems will be unable to handle. So now we have a problem.

It’s easy – and fun – to envision a complex collection of AI-driven components collaborating to create fully automated, perfectly personalized customer experiences. But that system will be prone to frequent failures as one or another component finds itself facing conditions it wasn’t trained to handle. If the systems are well designed (and we’re lucky), the components will shut themselves down when that happens. If we’re not so lucky, they’ll keep running and return increasingly inappropriate results. Yikes.

Where do we go from here? One conclusion would be that there’s a practical limit to how much of the marketing process can really be taken over by AI. Some people might find tha[...]
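A minimal sketch of the "recognize the unexpected, alert a supervisor, fail gracefully" pattern might look like the following. The z-score novelty test, the training numbers, and the thresholds are illustrative stand-ins for whatever out-of-distribution detection a real system would use.

```python
from statistics import mean, pstdev

# Sketch of graceful failure: a specialist model acts only when an
# input resembles its training data; otherwise it escalates to a human
# and takes a safe default. The novelty score here is a simple z-score
# against the training distribution -- purely illustrative.

TRAINING_SCORES = [0.42, 0.55, 0.47, 0.51, 0.49]
MU, SIGMA = mean(TRAINING_SCORES), pstdev(TRAINING_SCORES)

def decide(score, threshold=3.0):
    """Return (status, action): act normally in-distribution,
    escalate and fall back to a safe action otherwise."""
    z = abs(score - MU) / SIGMA
    if z > threshold:
        return ("escalate_to_human", "safe_default_action")
    return ("act", "model_chosen_action")

print(decide(0.50))  # in-distribution -> act
print(decide(0.99))  # far outside training range -> escalate
```

The design choice worth noting is that the safe default is chosen per task: as the self-driving-car example shows, "do nothing" is not always safe.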



Treasure Data Offers An Easy-to-Deploy Customer Data Platform

2017-08-20T17:42:56.637-04:00

One of my favorite objections from potential buyers of Customer Data Platforms is that CDPs are simply “too good to be true”. It’s a reasonable response from people who hear CDP vendors say they can quickly build a unified customer database but have seen many similar-seeming projects fail in the past. I like the objection because I can so easily refute it by pointing to real-world case histories where CDPs have actually delivered on their promise.

One of the vendors I have in mind when I’m referring to those histories is Treasure Data. They’ve posted several case studies on the CDP Institute Library, including one where data was available within one month and another where it was ready in two hours. Your mileage may vary, of course, but these cases illustrate the core CDP advantage of using preassembled components to ingest, organize, access, and analyze data. Without that preassembly, accessing just one source can take days, weeks, or even months to complete.

Even in the context of other CDP systems, Treasure Data stands out for its ability to connect with massive data sources quickly. The key is a proprietary data format that lets it access new data sources with little explicit mapping: in slightly more technical terms, Treasure Data uses a columnar data structure where new attributes automatically appear as new columns. It also helps that the system runs on Amazon S3, so little time is spent setting up new clients or adding resources as existing clients grow. Treasure Data ingests data using the open source connectors Fluentd for streaming inputs and Embulk for batch transfers. It provides deterministic and probabilistic identity matching, integrated machine learning, always-on encryption, and precise control over which users can access which pieces of data. One caveat is that there’s no user interface to manage this sort of processing: users basically write scripts and query statements. 
Treasure Data is working on a user interface to make this easier and to support complex workflows.

Data loaded into Treasure Data can be accessed through an integrated reporting tool and an interface that shows the set of events associated with a customer. But most users will rely on prebuilt connectors for Python, R, Tableau, and Power BI. Other SQL access is available using Hive, Presto, and ODBC. While there’s no user interface for creating audiences, Treasure Data does provide the functions needed to assign customers to segments and then push those segments to email, Facebook, or Google. It also has an API that lets external systems retrieve the list of all segments associated with a single customer.

Treasure Data clearly isn’t an all-in-one solution for customer data management. But organizations with the necessary technical skills and systems can find it hugely increases the productivity of their resources. The company was founded in 2011 and now has over 250 clients, about half from the data-intensive worlds of games, ecommerce, and ad tech. Annual cost starts around $100,000. The actual pricing models vary with the situation but are usually based on either the number of customer profiles being managed or total resource consumption.[...]
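The "new attributes automatically appear as new columns" idea can be illustrated with a toy dict-of-columns store. Treasure Data's actual format is proprietary; this only sketches the concept, with invented event fields.

```python
# Sketch of schema-flexible columnar ingestion: each previously unseen
# attribute in an incoming event becomes a new column, back-filled with
# None for earlier rows. Illustrative only.

def ingest(columns, events):
    """Append events to a dict-of-columns store, growing the set of
    columns as new attributes arrive."""
    n_rows = len(next(iter(columns.values()), []))
    for event in events:
        for key in event:
            if key not in columns:              # new attribute -> new column
                columns[key] = [None] * n_rows  # back-fill older rows
        for key in columns:
            columns[key].append(event.get(key))
        n_rows += 1
    return columns

store = {}
ingest(store, [{"user_id": "u1", "page": "/home"}])
ingest(store, [{"user_id": "u2", "page": "/pricing", "utm_source": "email"}])
print(store)
```

Because no mapping step is needed before loading, a new source can start flowing immediately, which is one plausible reason data can be usable in hours rather than months.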



Blueshift CDP Adds Advanced Features

2017-07-17T17:49:03.880-04:00

I reviewed Blueshift in June 2015, when the product had been in-market for just a few months and had a handful of large clients. Since then they’ve added many new features and grown to about 50 customers. So let’s do a quick update.

Basically, the system is still what it was: a Customer Data Platform that includes predictive modeling, content creation, and multi-step campaigns. Customer data can be acquired through the vendor’s own Javascript tags, mobile SDK (new since 2015), API connectors, or file imports. Blueshift also has collection connectors for Segment, Ensighten, mParticle, and Tealium. Product data can load through file imports, a standard API, or a direct connector to DemandWare.

As before, Blueshift can ingest, store, and index pretty much any data with no advance modeling, using JSON, MongoDB, Postgres, and Kafka. Users do have to tell source systems what information to send and map inputs to standard entities such as customer name, product ID, or interaction type. There is some new advanced automation, such as tying related events to a transaction ID. The system’s ability to load and expose imported data in near-real-time remains impressive.

Blueshift will stitch together customer identities using multiple identifiers and can convert anonymous to known profiles without losing any history. Profiles are automatically enhanced with product affinities and scores for purchase intent, engagement, and retention. The system had automated predictive modeling when I first reviewed it, but has now added machine-learning-based product recommendations. In fact, its recommendations are exceptionally sophisticated. Features include a wide range of rule- and model-based recommendation methods, an option for users to create custom recommendation types, and multi-product recommendation blocks that mix recommendations based on different rules. For example, the system can first pick a primary recommendation and then recommend products related to it. 
To check that the system is working as expected, users can preview recommendations for specified segments or individuals.

The segment builder in Blueshift doesn’t seem to have changed much since my last review: users select data categories, elements, and values used to include or exclude segment members. The system still shows the counts for how many segment members are addressable via email, display ads, push, and SMS.

On the other hand, the campaign builder has expanded significantly. The previous form-based campaign builder has been replaced by a visual interface that allows branching sequences of events and different treatments within each event. These treatments include thumbnails of campaign creative and can be in different channels. That's special because many vendors still limit campaigns to a single channel. Campaigns can be triggered by events, run on fixed schedules, or executed once. Each treatment within an event has its own selection conditions, which can incorporate any data type: previous behaviors, model scores, preferred communications channels, and so on. Customers are tested against the treatment conditions in sequence and assigned to the first treatment they match.

Content builders let users create templates for email, display ads, push messages, and SMS messages. This is another relatively rare feature. Templates can include personalized offers based on predictive models or recommendations. The system c[...]
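The first-match treatment rule described above is straightforward to sketch. The conditions, thresholds, and treatment names below are invented for illustration, not Blueshift's actual configuration.

```python
# Sketch of first-match treatment assignment: each treatment carries a
# selection condition; a customer receives the first treatment whose
# condition they satisfy. All conditions and names are invented.

TREATMENTS = [
    ("vip_offer",     lambda c: c.get("lifetime_value", 0) > 1000),
    ("winback_email", lambda c: c.get("days_since_purchase", 0) > 90),
    ("default_push",  lambda c: True),  # catch-all so everyone matches
]

def assign_treatment(customer):
    for name, condition in TREATMENTS:
        if condition(customer):
            return name

print(assign_treatment({"lifetime_value": 2500}))      # vip_offer
print(assign_treatment({"days_since_purchase": 120}))  # winback_email
print(assign_treatment({}))                            # default_push
```

Note the consequence of first-match ordering: a high-value lapsed customer gets the VIP offer, not the winback email, so the list order itself encodes priority.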



Lexer Customer Data Platform Grows from Social Listening Roots

2017-07-10T07:43:01.017-04:00

Customer Data Platform vendors come from many places, geographically and functionally. Lexer is unusual in both ways, having started in Australia as a social media listening platform. About two years ago the company refocused on building customer profiles with data from all sources. It quickly added clients among many of Australia’s largest consumer-facing brands, including the airline Qantas and the bank Westpac.

Social media is still a major focus for Lexer. The system gathers data from Facebook and Instagram public pages and from the Twitter follower lists of clients’ brands. It analyzes posts and follows to understand consumer interests, assigning people to “tribes” such as “beach lifestyle” and personas such as “sports and fitness”. It supplements the social inputs with information from third party data sources, location history, and a client’s own email, Web site, customer service, mobile apps, surveys, point of sale, and other systems. Matching is strictly deterministic, although links based on different matches can be chained together to unify identities across channels. The system can also use third party data to add connections it can’t make directly.
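Chaining deterministic matches into unified identities is classically done with a union-find (disjoint-set) structure; here is a sketch with invented identifiers (Lexer's internals may well differ).

```python
# Sketch of deterministic identity chaining: exact-match links between
# identifiers (email<->cookie, cookie<->device, etc.) are merged with
# union-find so identities unify across channels. IDs are invented.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

links = [
    ("email:ann@example.com", "cookie:abc"),  # web login
    ("cookie:abc", "device:ios-42"),          # app install
    ("email:ann@example.com", "loyalty:L9"),  # point of sale
]

uf = UnionFind()
for a, b in links:
    uf.union(a, b)

# All four identifiers now resolve to a single customer.
print(uf.find("device:ios-42") == uf.find("loyalty:L9"))  # True
```

Because each link is an exact match, chaining never merges two people by guesswork; probabilistic connections (like the third party data mentioned above) would be layered on separately.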

Lexer ingests data in near-real-time, making social media posts available to users within about five minutes. It can react to new data by moving customers into different tribes or personas and can send lists of those customers to external systems for targeting in social, email, or other channels. There are standard integrations with Facebook, Twitter, and Google AdWords advertising campaigns. External systems can also use an API to read the Lexer data, which is stored in Amazon Elasticsearch.

Unusually for a CDP, Lexer also provides a social engagement system that lets service agents engage directly with customers. This system displays the customer’s profile, including a detailed interaction history and group memberships. Segment visualization is unusually colorful and attractive.

Lexer has about forty clients, nearly all in Australia. It is just entering the U.S. market and hasn’t set U.S. prices.