Subscribe: Customer Experience Matrix
http://customerexperiencematrix.blogspot.com/atom.xml

Customer Experience Matrix



This is the blog of David M. Raab, marketing technology consultant and analyst. Mr. Raab is Principal at Raab Associates Inc. The blog is named for the Customer Experience Matrix, a tool to visualize marketing and operational interactions between a company and its customers.



Updated: 2017-12-10T22:10:41.254-05:00

 



Here's a Game to Illustrate Strategic Planning

2017-12-06T23:14:54.242-05:00

My wife is working on a Ph.D. in education and recently took a course on strategic planning for academic institutions. Her final project included creating a game to help illustrate the course lessons. What she came up with struck me as applicable to planning in all industries, so I thought I’d share it here.

The fundamental challenge she faced in designing the game was to communicate key concepts about strategic planning. The main message was that strategic planning is about choosing among different strategies to find the one that best matches available resources. That’s pretty abstract, so she made it concrete by presenting game players with a collection of alternative strategies, each on a card of its own. She then created a second set of cards that listed actions available to the players. Each action card showed which strategies the action supported and what resources it required. There were four resources: money, faculty, students, and administrative staff. To keep things simple, she assumed that total resources were fixed, that each strategy contributed equally to the ultimate goal, and that each action contributed equally to whichever strategies it supported. In other words, the components of the game were:

- One goal. In the case of my wife’s game, the goal was to achieve a “top ten” ranking for a particular department within a university. (It was a good goal because it was easily understood and measured.)

- Four strategies. In my wife’s game, the options were to build up the department, cooperate with other departments at the university, cooperate with other universities, or promote the department to the media and general public.

- A dozen actions. Each action supported at least one strategy (scored with 1 or 0) and consumed some quantity of the four resources (scored from 0 to 3 for each resource). Actions were things like “run a conference”, “set up a cross-disciplinary course” and “finance faculty research”.

- Four resources, each assigned an available quantity (i.e., budget).

As you can tell from the description, the action cards are the central feature of the game. Here's a concrete example, where each row represents one action card:

The fundamental game mechanism was to pick a set of actions. These were scored by counting how many supported each strategy and how many resources they consumed. The resource totals couldn't exceed the available quantities for each resource. The table below shows scoring for a set of three actions. In this particular example, all three actions support "cooperate with other departments", while two support "build department" and one each supports "cooperate with other universities" and "promote to public". Resource needs were money = 8, faculty = 6, students = 5 and administration = 1. Someone with these cards could choose "cooperate with other departments" as the best strategy -- if the resources permitted. But if they were limited to 7 points for each resource, they might switch the "fund scholarship" card for the "extracurricular enrichment" card, which uses less money even though it consumes more of the other resources. That works because, with a budget of 7 for each resource, the player can afford to increase spending in the other categories.

As this example suggests, the goal of the game is to get players to think about the relations among strategies, actions and resources, and in particular how to choose actions that fit with strategies and resources.

Although the basic scoring approach is built into the game, there are many ways my wife could have played it:

- Predefine available resources and let different players draw different action cards. They would then decide which strategy best fit the available cards and resources.

- Give different strategy cards to different players and put all action cards face up on the table. Players then each choose one action card in turn, trying to assemble the best set of actions for their assigned strategy.

- Randomly select the r[...]
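The scoring mechanics lend themselves to a short sketch. The card names and numbers below are invented for illustration (the post's actual card table isn't reproduced here); only the rules follow the game's design: 1/0 strategy support, 0-3 resource costs, and totals that must stay within budget.

```python
# A minimal sketch of the game's scoring mechanics. Card data is hypothetical.

STRATEGIES = ["build department", "cooperate with departments",
              "cooperate with universities", "promote to public"]
RESOURCES = ["money", "faculty", "students", "administration"]

# Each action card: strategies supported (1/0) and resources consumed (0-3).
ACTIONS = {
    "run a conference":          {"supports": [0, 1, 1, 1], "costs": [3, 2, 0, 1]},
    "cross-disciplinary course": {"supports": [1, 1, 0, 0], "costs": [1, 2, 1, 0]},
    "finance faculty research":  {"supports": [1, 0, 0, 0], "costs": [3, 1, 0, 0]},
}

def score(chosen, budget):
    """Total strategy support and resource use for a set of action cards.

    Returns (strategy_totals, resource_totals, within_budget).
    """
    strategy_totals = [0] * len(STRATEGIES)
    resource_totals = [0] * len(RESOURCES)
    for name in chosen:
        card = ACTIONS[name]
        strategy_totals = [s + x for s, x in zip(strategy_totals, card["supports"])]
        resource_totals = [r + c for r, c in zip(resource_totals, card["costs"])]
    within = all(r <= b for r, b in zip(resource_totals, budget))
    return strategy_totals, resource_totals, within

totals, used, ok = score(ACTIONS.keys(), budget=[7, 7, 7, 7])
```

Players would run this scoring for different card sets and pick the strategy with the highest support among sets that fit the budget.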



2017 Retrospective: Things I Didn't Predict

2017-12-02T10:32:32.131-05:00

It’s the time of year when people make predictions. It’s not my favorite exercise: the best prediction is always that things will continue as they are, but what’s really interesting is change – and significant change is inherently unpredictable. (See Nassim Nicholas Taleb's The Black Swan and Philip Tetlock's Superforecasting on those topics.)

So I think instead I’ll take a look at surprising changes that already happened. I’ve covered many of these in the daily newsletter of the Customer Data Platform Institute (click here to subscribe for free). In no particular order, things I didn’t quite expect this year include:

- Pushback against the walled garden vendors (Facebook, Google, Amazon, Apple, etc.). Those firms continue to dominate life online, and advertising and ecommerce in particular. (Did you know that Amazon accounted for more than half of all Black Friday sales last week?) But the usual whining about their power from competitors and ad buyers has recently been joined by increasing concerns among the public, media, and government. What’s most surprising is that it took so long for the government to recognize the power those companies have accrued and the very real threat they pose to governmental authority. (See Martin Gurri’s The Revolt of the Public for by far the best explanation I’ve seen of how the Internet affects politics.) On the other hand, the concentrated power of the Web giants means they could easily be converted into agents of control if the government took over. Don’t think this hasn’t occurred to certain (perhaps most) people in Washington. Perhaps that’s why they’re not interested in breaking them up. Consistent with this thought: the FCC plan to end Net Neutrality will give much more power to cable companies, which as highly regulated utilities have a long history of working closely with government authorities. It’s pitifully easy to imagine the cable companies as enthusiastic censors of unapproved messages.

- Growth in alternative personal data sources. Daily press announcements include a constant stream of news from companies that have found some new way to accumulate data about where people are going, who they meet, what they’re buying, what they plan to buy, what content they’re consuming, and pretty much everything else. Location data is especially common, derived from mobile apps that most people surely don’t realize are tracking them. But I’ve seen other creative approaches such as scanning purchase receipts (in return for a small financial reward, of course) and even using satellite photos to track store foot traffic. In-store technology such as beacons and wifi track behaviors even more precisely, and I’ve seen some fascinating (and frightening) claims about visual technologies that capture people’s emotions as well as identities. Combine those technologies with ubiquitous high resolution cameras, both mounted on walls and built into mobile devices, and the potential to know exactly who does and thinks what is all too real. Cross-device matching and cross-channel identity matching (a.k.a. “onboarding”) are part of this too.

- Growth in voice interfaces. Voice interfaces don't have the grand social implications of the preceding items, but it’s still worth noting that voice-activated devices (Amazon Alexa and friends) and interfaces (Siri, Cortana, etc.) have grown more quickly than I anticipated. The change does add new challenges for marketers who were already having a hard time figuring out where to put ads on a mobile phone screen. With voice, they have no screen at all. Having your phone read ads to you, or perhaps worse sing a catchy jingle, will be pretty annoying. To take a more positive view: voice interfaces will force innovation in how marketers sell and put a premium on agent-based services that make more decisions for consumers. Of course, that's only positive if the agents actually work in consumers’ interest. If the agents also serve other mas[...]



Do Customer Data Platforms Need Identity Matching? The Answer May Surprise You.

2017-11-22T14:38:48.338-05:00

I spend a lot of time with vendors trying to decide whether they are, or should be, a Customer Data Platform. I also spend a lot of time with marketers trying to decide which CDPs might be right for them. One topic that’s common in both discussions is whether a CDP needs to include identity resolution – that is, the ability to decide which identifiers (name/address, phone number, email, cookie ID, etc.) belong to the same person.

It seems like an odd question. After all, the core purpose of a CDP is to build a unified customer database, which requires connecting those identifiers so data about each customer can be brought together. So surely identity resolution is required.

Turns out, not so much. There are actually several reasons.

- Some marketers don’t need it. Companies that deal only in a single channel often have just one identifier per customer. For example, Web-only companies might use just a cookie ID. True, channel-specific identifiers sometimes change (e.g., cookies get deleted). But there may be no practical way to link old and new identifiers when that happens, or marketers may simply not care. A more common situation is companies that have already built an identity resolution process, often because they’re dealing with customers who identify themselves by logging in or who transact through accounts. Financial institutions, for example, often know exactly who they’re dealing with because all transactions are associated with an account that’s linked to a customer's master record (or perhaps not linked because the customer prefers it that way). Even when identity resolution is complicated, mature companies often (well, sometimes) have mature processes to apply a customer ID to all data before it reaches the CDP. In any of these cases, the CDP can use the ID it’s given and doesn't need an identity resolution process of its own.

- Some marketers can only use it if it’s perfect. Again, think of a financial institution: it can’t afford to guess who’s trying to take money out of an account, so it requires the customer to identify herself before making a transaction. In many other circumstances, absolute certainty isn’t required but a false association could be embarrassing or annoying enough that the company isn’t willing to risk it. In those cases, all that’s needed is an ability to “stitch” together identifiers based on definite connections. That might mean two devices are linked because they both sent emails using the same email address, or an email and phone number are linked because someone entered them both into a registration form. Almost every CDP has this sort of “deterministic” linking capability, which is so straightforward that it barely counts as identity resolution in the broader sense.

- Specialized software already exists. The main type of matching that CDPs do internally – beyond simple stitching – is “fuzzy” matching. This applies rules to decide when two similar-looking records really refer to the same person. It's most commonly applied to names and postal addresses, which are often captured inconsistently from one source to the next. It might sometimes be applied to other types of data, such as different forms of an email address (e.g., draab@raabassociates.com and draab@raabassociatesinc.com). The technology for this sort of matching gets very complicated very quickly, and it’s something that specialized vendors offer either for purchase or as a service. So CDP vendors can quite reasonably argue they needn’t build this for themselves but should simply integrate an external product.

- Much identity resolution requires external data. This is the heart of the matter. Most of the really interesting identity resolution today involves linking different devices or linking across channels when there’s no known connection. This sort of “probabilistic” linking is generally done by vendors who capture huge amounts of behavioral data by tracking visitors[...]
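The deterministic "stitching" described above – linking identifiers that appear together on the same login, email, or registration form – is typically implemented with a union-find structure, which keeps linking fast even across millions of identifiers. Here is a minimal sketch; the sample identifiers are invented for illustration, and real CDPs would of course persist and scale this differently.

```python
# Sketch of deterministic identity stitching: identifiers observed together
# are merged into one cluster via union-find (with path halving for speed).

class IdentityGraph:
    def __init__(self):
        self.parent = {}  # identifier -> parent identifier

    def _find(self, ident):
        """Return the cluster root for an identifier, creating it if new."""
        self.parent.setdefault(ident, ident)
        while self.parent[ident] != ident:
            # Path halving: point each node at its grandparent as we walk up.
            self.parent[ident] = self.parent[self.parent[ident]]
            ident = self.parent[ident]
        return ident

    def link(self, a, b):
        """Record a definite connection: a and b belong to the same person."""
        ra, rb = self._find(a), self._find(b)
        if ra != rb:
            self.parent[ra] = rb

    def same_person(self, a, b):
        return self._find(a) == self._find(b)

graph = IdentityGraph()
graph.link("cookie:abc123", "email:jane@example.com")    # e.g., a login event
graph.link("email:jane@example.com", "phone:+15550100")  # e.g., a registration form
```

Because the links are transitive, the cookie and the phone number end up in the same cluster even though they were never observed together directly.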



No, Users Shouldn't Write Their Own Software

2017-11-10T11:58:03.654-05:00

Salesforce this week announced “myEinstein” self-service artificial intelligence features to let non-technical users build predictive models and chatbots. My immediate reaction was that's a bad idea: top-of-the-head objections include duplicated effort, wasted time, and the potential for really bad results. I'm sure I could find other concerns if I thought about it, but today’s world brings a constant stream of new things to worry about, so I didn’t bother. But then today’s news described an “Everyone Can Code” initiative from Apple, which raised essentially the same issue in even clearer terms: should people create their own software?

I thought this idea had died a well-deserved death decades ago. There was a brief period when people thought that “computer literacy” would join reading, writing, and arithmetic as basic skills required for modern life. But soon they realized that you can run a computer using software someone else wrote!* That made the idea of everyone writing their own programs seem obviously foolish – specifically because of duplicated effort, wasted time, and the potential for really bad results. It took IT departments much longer to come around to the notion of buying packaged software instead of writing their own, but even that battle has now mostly been won. Today, smart IT groups only create systems to do things that are unique to their business and provide significant competitive advantage.

But the idea of non-technical workers creating their own systems isn't just about packaged vs. self-written software. It generally arises from a perception that corporate systems don’t meet workers’ needs: either because the corporate systems are inadequate or because corporate IT is hard to work with and has other priorities. Faced with such obstacles to getting their jobs done, the more motivated and technically adept users will create their own systems, often working with tools like spreadsheets that aren’t really appropriate but have the unbeatable advantage of being available. Such user-built systems frequently grow to support work groups or even departments, especially at smaller companies. They’re much disliked by corporate IT, sometimes for turf protection but mostly because they pose very real dangers to security, compliance, reliability, and business continuity. Personal development on a platform like myEinstein poses many of the same risks, although the data within Salesforce is probably more secure than data held on someone’s personal computer or mobile phone.

Oddly enough, marketing departments have been a little less prone to this sort of guerrilla IT development than some other groups. The main reason is probably that modern marketing revolves around customer data and customer-facing systems, which are still managed by a corporate resource (not necessarily IT: it could be Web development, marketing ops, or an outside vendor). In addition, the easy availability of Software as a Service packages has meant that even rogue marketers are using software built by professionals. (Although once you get beyond customer data to things like planning and budgeting, it’s spreadsheets all the way.)

This is what makes the notion of systems like myEinstein so dangerous (and I don’t mean to pick on Salesforce in particular; I’m sure other vendors have similar ideas in development). Because those systems are directly tied into corporate databases, they remove the firewall that (mostly) separated customer data and processes from end-user developers. This opens up all sorts of opportunities for well-intentioned workers to cause damage.

But let’s assume there are enough guardrails in place to avoid the obvious security and customer treatment risks. Personal systems have a more fundamental problem: they’re personal. That means they can only manage processes that are within the developer’s personal control. But customer experiences span multiple users, departments, and systems. This means th[...]



TrenDemon and Adinton Offer Attribution Options

2017-11-06T13:41:28.726-05:00

I wrote a couple of weeks ago about the importance of attribution as a guide for artificial intelligence-driven marketing. One implication was that I should pay more attention to attribution systems. Here’s a quick look at two products that tackle different parts of the attribution problem: content measurement and advertising measurement.

TrenDemon

Let’s start with TrenDemon. Its specialty is measuring the impact of marketing content on long B2B sales cycles. It does this by placing a tag on client Web sites to identify visitors and track the content they consume, and then connecting client CRM systems to find which visitor companies ultimately made a purchase (or reached some other user-specified goal). Visitors are identified by company using their IP address and as individuals by tracking cookies.

TrenDemon does a bit more than correlate content consumption and final outcomes. It also identifies when each piece of content is consumed, distinguishing between the start, middle, and end of the buying journey. It also looks at other content metrics such as how many people read an item, how much time they spend with it, and how many read something else after they’re done. These and other inputs are combined to generate an attribution score for each item. The system uses the score to identify the most effective items for each journey stage and to recommend which items should be presented in the future.

Pricing for TrenDemon starts at $800 per month. The system was launched in early 2015 and is currently used by just over 100 companies.

Adinton

Next we have Adinton, a Barcelona-based firm that specializes in attribution for paid search and social ads. Adinton has more than 55 clients throughout Europe, mostly selling travel and insurance online. Such purchases often involve multiple Web site visits but still have a shorter buying cycle than complex B2B transactions. Adinton has pixels to capture Web ad impressions as well as Web site visits. Like TrenDemon, it tracks site visitors over time and distinguishes between starting, middle, and finishing clicks. It also distinguishes between attributed and assisted conversions. When possible, it builds a unified picture of each visitor across devices and channels.

The system uses this data to calculate the cost of different click types, which it combines to create a “true” cost per action for each ad purchase. It compares this with the client’s target cost per action to determine where they are over- or under-investing. Adinton has API connections to gather data from Google AdWords, Facebook Ads, Bing Ads, AdRoll, RocketFuel, and other advertising channels. An autobidding system can currently adjust bids in AdWords and will add Facebook and Bing adjustments in the near future. The system also does keyword research and click fraud identification.

Pricing is based on the number of clicks and starts as low as $299 per month for attribution analysis, with additional fees for autobidding and click fraud modules. Adinton was founded in 2013. It launched its first product in 2014, although attribution came later.

Further Thoughts

These two products were chosen almost at random, so I wouldn’t assign any global significance to their features. But it’s still intriguing that both add a first/middle/last buying stage to the analysis. It’s also interesting that they occupy a middle ground between totally arbitrary attribution methodologies, such as first touch/last touch/fractional credit, and advanced algorithmic methods that attempt to calculate the true incremental impact of each touch. (Note that neither TrenDemon’s nor Adinton’s summary metric is presented as estimating incremental value.) Of course, without true incremental value, neither system can claim to develop an optimal spending allocation. One interpretation might be that few marketers are ready for a full-blown algorithmic approach but many are open to something more than the [...]
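For readers unfamiliar with the "totally arbitrary" end of the spectrum: position-based fractional credit, in the spirit of the first/middle/last distinction both products draw, can be sketched in a few lines. The 40/20/40 split below is a common industry convention, not either vendor's actual model.

```python
# Sketch of position-based fractional attribution: heavy credit to the first
# and last touches, the remainder spread over the middle. The 40/20/40 split
# is a common convention, not TrenDemon's or Adinton's actual method.

def position_credit(touches):
    """Split one conversion's credit across an ordered list of touch channels."""
    if not touches:
        return {}
    credit = {t: 0.0 for t in touches}
    if len(touches) == 1:
        credit[touches[0]] = 1.0
    elif len(touches) == 2:
        credit[touches[0]] += 0.5
        credit[touches[-1]] += 0.5
    else:
        credit[touches[0]] += 0.4           # first touch
        credit[touches[-1]] += 0.4          # last touch
        middle = touches[1:-1]
        for t in middle:                    # remaining 20% over the middle
            credit[t] += 0.2 / len(middle)
    return credit

shares = position_credit(["search", "display", "email", "direct"])
```

The point of the article's "middle ground" observation is that this kind of rule assigns credit by position alone, while algorithmic methods try to estimate each touch's actual incremental effect.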



Flytxt Offers Broad and Deep Customer Management

2017-10-29T19:43:23.601-04:00

Some of the most impressive marketing systems I’ve seen have been developed for mobile phone marketing, especially for companies that sell prepaid phones. I don’t know why: probably some combination of intense competition, easy switching when customers have no subscription, location as a clear indicator of varying needs, immediately measurable financial impact, and lack of legacy constraints in a new industry. Many of these systems have developed outside the United States, since prepaid phones have a smaller market share here than elsewhere.

Flytxt is a good example. Founded in India in 2008, its original clients were South Asian and African companies whose primary product was text messaging. The company has since expanded in all directions: it has clients in 50+ countries including South America and Europe plus a beachhead in the U.S.; its phone clients sell many more products than text; it has a smattering of clients in financial services and manufacturing; and it has corporate offices in Dubai and headquarters in the Netherlands.

The product itself is equally sprawling. Its architecture spans what I usually call the data, decision, and delivery layers, although Flytxt uses different language. The foundation (data) layer includes data ingestion from batch and real-time sources with support for structured, semi-structured and unstructured data, data preparation including deterministic identity stitching, and a Hadoop-based data store. The intelligence (decision) layer provides rules, recommendations, visualization, packaged and custom analytics, and reporting. The application (delivery) layer supports inbound and outbound campaigns, a mobile app, and an ad server for clients who want to sell ads on their own Web sites. To be a little more precise, Flytxt’s application layer uses API connectors to send messages to actual delivery systems such as Web sites and email engines. Most enterprises prefer this approach because they have sophisticated delivery systems in place and use them for other purposes beyond marketing messaging.

And while we’re being precise: Flytxt isn’t a Customer Data Platform because it doesn’t give external systems direct access to its unified customer data store. But it does provide APIs to extract reports and selected data elements and can build custom connectors as needed. So it could probably pass as a CDP for most purposes.

Given the breadth of Flytxt’s features, you might expect the individual features to be relatively shallow. Not so. The system has advanced capabilities throughout. Examples include anonymizing personally identifiable information before sharing customer data; multiple language versions attached to a single offer; rewards linked to offers; contact frequency limits by channel across all campaigns; rule- and machine learning-based recommendations; six standard predictive models plus tools to create custom models; automated control groups in outbound campaigns; real-time event-based program triggers; and a mobile app with customer support, account management, chat, personalization, and transaction capabilities. The roadmap is also impressive, including automated segment discovery and autonomous agents to find next best actions.

What particularly caught my eye was Flytxt’s ability to integrate context with offer selection. Real-time programs are connected to touchpoints such as a Web site. When a customer appears, Flytxt identifies the customer, looks up her history and segment data, infers intent from the current behavior and context (such as location), and returns the appropriate offer for the current situation. The offer and message can be further personalized based on customer data.

This ability to tailor behaviors to the current context is critical for reacting to customer needs and taking advantage of the opportunities those needs create. It’s not unique to Flytxt but it's als[...]
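The real-time flow described above – identify, look up profile, infer intent, return an offer – can be sketched roughly as follows. This is not Flytxt's actual API; every name, rule, and offer here is invented purely to illustrate the shape of the flow.

```python
# Rough sketch of context-aware offer selection: identify the customer, look
# up history and segment, infer intent from behavior and context, pick an
# offer. All names, rules, and offers are hypothetical, not Flytxt's API.

PROFILES = {
    "cust-001": {"segment": "high value", "history": ["browsed roaming plans"]},
}

# Offers list in priority order; None means "matches anything" (fallback last).
OFFERS = [
    {"name": "roaming pack discount", "intent": "travel", "segments": {"high value"}},
    {"name": "generic top-up bonus", "intent": None, "segments": None},
]

def infer_intent(event):
    """Guess intent from current behavior and context (e.g., location)."""
    if event.get("page") == "roaming" or event.get("location") == "airport":
        return "travel"
    return None

def select_offer(customer_id, event):
    profile = PROFILES.get(customer_id, {})
    intent = infer_intent(event)
    for offer in OFFERS:
        intent_ok = offer["intent"] is None or offer["intent"] == intent
        segment_ok = offer["segments"] is None or profile.get("segment") in offer["segments"]
        if intent_ok and segment_ok:
            return offer["name"]
    return None

offer = select_offer("cust-001", {"page": "roaming", "location": "airport"})
```

A production system would replace the hard-coded rules with the machine-learning recommendations the article describes, but the identify/look up/infer/return sequence is the same.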



When to Use a Proof of Concept in Marketing Software Selection -- And When Not

2017-10-22T16:55:51.230-04:00

“I used to hate POCs (Proof of Concepts) but now I love them,” a Customer Data Platform vendor told me recently. “We do POCs all the time,” another said when I raised the possibility on behalf of a client.

Two comments could be a coincidence. (Three make a Trend.) But, as the first vendor indicated, POCs have traditionally been something vendors really disliked. So even the possibility that they’ve become more tolerable is worth exploring.

We should start by defining the term. Proof of Concept is a demonstration that something is possible. In technology in general, the POC is usually an experimental system that performs a critical function that had not previously been achieved. A similar definition applies to software development. In the context of marketing systems, though, a POC is usually not so much an experiment as a partial implementation of an existing product. What's being proven is the system's ability to execute key functions on the buyer's own data and/or systems. The distinction is subtle but important because it puts the focus on meeting the client's needs.

Of course, software buyers have always watched system demonstrations. Savvy buyers have insisted that demonstrations execute scenarios based on their own business processes. A carefully crafted set of scenarios can give a clear picture of how well a system does what the client wants. Scenarios are especially instructive if the user can operate the system herself instead of just watching a salesperson. What scenarios don’t illustrate is loading a buyer’s data into the system or the preparation needed to make that data usable. That’s where the POC comes in.

The cost of loading client data was the reason most vendors disliked POCs. Back in the day, it required detailed analysis of the source data and hand-tuning of the transformation processes to put the data into the vendor’s database. Today this is much easier because source systems are usually more accessible and marketing systems – at least if they’re Customer Data Platforms – have features that make transformation and mapping much more efficient.

The ultimate example of easier data loads is the one-click connection between many marketing automation and CRM “platforms” and applications that are pre-integrated with those platforms. The simplicity is possible because the platforms and the apps are cloud-based, Software as a Service products. This means there are no custom implementations or client-run systems to connect. Effortless connections let many vendors offer free trials, since little or no vendor labor is involved in loading a client’s data. In fact, free trials are problematic precisely because so little work goes into setting them up. Some buyers are diligent about testing their free trial system and get real value from the experience. But many set up a free trial and then don't use it, or use it briefly without putting in the effort to learn how the system works. This means that all but the simplest products don’t get a meaningful test and users often underestimate the value of a system because they haven’t learned what it can do.

POCs are not quite the same as free trials because they require more effort from the vendor to set up. In return, most vendors will require a corresponding effort from the buyer to test the POC system. On balance that’s a good thing since it ensures that both parties will learn from the project.

Should a POC be part of every vendor selection process? Not at all. POCs answer some important questions, including how easily the vendor can load source data and what it’s like to use the system with your own data. A POC makes sense when those are critical uncertainties. But it’s also possible to answer some of those questions without a P[...]



Wizaly Offers a New Option for Algorithmic Attribution

2017-10-16T15:43:59.616-04:00

Wizaly is a relatively new entrant in the field of algorithmic revenue attribution – a function that will be essential for guiding artificial-intelligence-driven marketing of the future. Let’s take a look at what they do.

First a bit of background: Wizaly is a spin-off of Paris-based performance marketing agency ESV Digital (formerly eSearchVision). The agency’s performance-based perspective meant it needed to optimize spend across the entire customer journey, not simply use first- or last-click attribution approaches which ignore intermediate steps on the path to purchase. Wizaly grew out of this need.

Wizaly’s basic approach to attribution is to assemble a history of all messages seen by each customer, classify customers based on the channels they saw, compare results of customers whose experience differs by just one channel, and attribute any difference in results to that channel. For example, one group of customers might have seen messages in paid search, organic search, and social; another might have seen messages in those channels plus display retargeting. Any difference in performance would be attributed to display retargeting.

This is a simplified description; Wizaly is also aware of other attributes such as the profiles of different customers, traffic sources, Web site engagement, location, browser type, etc. It apparently factors some or all of these into its analysis to ensure it is comparing performance of otherwise-similar customers. It definitely lets users analyze results based on these variables so they can form their own judgements.
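The core comparison can be sketched in a few lines. This is a deliberately simplified illustration of the one-channel-difference idea, not Wizaly's actual algorithm, and the journey data below is invented; the real system also controls for customer profile, traffic source, and the other attributes mentioned above.

```python
# Simplified illustration of channel-ablation attribution: group customers by
# the set of channels they saw, then attribute the conversion-rate difference
# between two groups differing by one channel to that channel.
# Journey data is invented for illustration; this is not Wizaly's algorithm.

from collections import defaultdict

journeys = [  # (channels seen, converted?)
    ({"paid search", "social"}, True),
    ({"paid search", "social"}, False),
    ({"paid search", "social", "retargeting"}, True),
    ({"paid search", "social", "retargeting"}, True),
]

def conversion_rates(journeys):
    """Conversion rate for each distinct set of channels seen."""
    stats = defaultdict(lambda: [0, 0])  # channel set -> [conversions, total]
    for channels, converted in journeys:
        key = frozenset(channels)
        stats[key][0] += int(converted)
        stats[key][1] += 1
    return {k: conv / total for k, (conv, total) in stats.items()}

def channel_lift(rates, base_channels, extra_channel):
    """Conversion-rate difference attributed to one added channel."""
    return rates[frozenset(base_channels | {extra_channel})] - rates[frozenset(base_channels)]

rates = conversion_rates(journeys)
lift = channel_lift(rates, {"paid search", "social"}, "retargeting")
```

In this toy data the group that also saw retargeting converted at 100% versus 50% for the base group, so the method credits retargeting with the 50-point difference; the controls for otherwise-similar customers are what make such a comparison defensible in practice.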

Wizaly gets its data primarily from pixels it places on ads and Web pages. These drop cookies to track customers over time and can track ads that are seen, even if they’re not clicked, as well as detailed Web site behaviors. The system can incorporate television through an integration with Realytics, which correlates Web traffic with when TV ads are shown. It can import ad costs and ingest offline purchases to use in measuring results. The system can stitch together customer identities using known identifiers. It can also do some probabilistic matching based on behaviors and connection data and will supplement this with data from third-party cross device matching specialists.

Reports include detailed traffic analysis, based on the various attributes the system collects; estimates of the importance and effectiveness of each channel; and recommended media allocations to maximize the value from ad spending. The system doesn't analyze the impact of message or channel sequence, compare the effectiveness of different messages, or estimate the impact of messages on long-term customer outcomes. As previously mentioned, it has a partial blind spot for mobile – a major concern, given how important mobile has become – and other gaps for offline channels and results. These are problems for most algorithmic attribution products, not just Wizaly.

One definite advantage of Wizaly is price: at $5,000 to $15,000 per month, it is generally cheaper than better-known competitors. Pricing is based on traffic monitored and data stored. The company was spun off from ESV Digital in 2016 and currently has close to 50 clients worldwide.



Attribution Will Be Critical for AI-Based Marketing Success

2017-10-08T07:31:32.279-04:00

I gave my presentation on Self-Driving Marketing Campaigns at the MarTech conference last week. Most of the content followed the arguments I made here a couple of weeks ago, about the challenges of coordinating multiple specialist AI systems. But prepping for the conference led me to refine my thoughts, so there are a couple of points I think are worth revisiting.

The first is the distinction between replacing human specialists with AI specialists, and replacing human managers with AI managers. Visually, the first progression looks like this, as AI gradually takes over specialized tasks in the marketing department:

The insight here is that while each machine presumably does its job much better than the human it replaces,* the output of the team as a whole can’t fundamentally change because of the bottleneck created by the human manager overseeing the process. That is, work is still organized into campaigns that deal with customer segments because the human manager needs to think in those terms. It’s true that the segments will keep getting smaller, the content within each segment more personalized, and more tests will yield faster learning. But the human manager can only make a relatively small number of decisions about what the robots should do, and that puts severe limits on how complicated the marketing process can become.

The really big change happens when that human manager herself is replaced by a robot:

Now the manager can also deal with more-or-less infinite complexity. This means we no longer need campaigns and segments and can truly orchestrate treatments for each customer as an individual. In theory, the robot manager could order her robot assistants to create custom messages and offers in each situation, based on the current context and past behaviors of the individual human involved. In essence, each customer has a personal robot following her around, figuring out what’s best for her alone, and then calling on the other robots to make it happen.
Whether that's a paradise or nightmare is beyond the scope of this discussion.

In my post a few weeks ago, I was very skeptical that manager robots would be able to coordinate the specialist systems any time soon. That now strikes me as less of a barrier. Among other reasons, I’ve seen vendors including Jivox and RevJet introduce systems that integrate large portions of the content creation and delivery workflows, potentially or actually coordinating the efforts of multiple AI agents within the process. I also had an interesting chat with the folks at Albert.ai, who have addressed some of the knottier problems about coordinating the entire campaign process. These vendors are still working with campaigns, not individual-level journey orchestration. But they are definitely showing progress.

As I've become less concerned about the challenges of robot communication, I've grown more concerned about robots making the right decisions. In other words, the manager robot needs a way to choose what the specialist robots will work on so they are doing the most productive tasks. The choices must be based on estimating the value of different options. Creating such estimates is the job of revenue attribution. So it turns out that accurate attribution is a critical requirement for AI-based orchestration.

That’s an important insight. All marketers acknowledge that attribution is important but most have focused their attention on other tasks in recent years. Even vendors that do attribution often limit themselves to assigning user-selected fractions of value to different channels or touches, replacing the obviously-incorrect first- and last-touch models with less-obviously-but-still-incorrect models such as “U-shaped”, “W-shaped”, and “time decay”. All these approaches are based on assu[...]
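For readers who haven't worked with them, the fractional models named above are just fixed weighting rules over the sequence of touches. Here is a minimal Python sketch of two common conventions; the exact percentages vary by vendor, so treat these numbers as illustrative assumptions.

```python
def u_shaped(n):
    """U-shaped (position-based) attribution: 40% of credit to the first
    and last touches, the remaining 20% split across the middle touches.
    (Percentages are the commonly cited convention, not a standard.)"""
    if n == 1:
        return [1.0]
    if n == 2:
        return [0.5, 0.5]
    middle = 0.2 / (n - 2)
    return [0.4] + [middle] * (n - 2) + [0.4]

def time_decay(n, half_life=2.0):
    """Time-decay attribution: a touch's weight doubles every `half_life`
    positions closer to conversion, then weights are normalized to sum to 1."""
    raw = [2 ** (i / half_life) for i in range(n)]
    total = sum(raw)
    return [w / total for w in raw]

print(u_shaped(4))   # [0.4, 0.1, 0.1, 0.4]
print(time_decay(3))
```

The point of the post stands either way: both rules hand out credit by position alone, with no reference to measured incremental value.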



Customer Data Platforms Spread Their Wings

2017-09-28T21:39:45.343-04:00

I escaped from my cave this week to present at two conferences: the first-ever “Customer Data Platform Summit” hosted by AgilOne in Los Angeles, preceding Shop.org, and the Technology for Marketing conference in London, where BlueVenn sponsored me. I listened as much as I could along the way to find what’s new with the vendors and their clients. There were some interesting developments.

Broader awareness of CDP. The AgilOne event was invitation-only while the London presentation was open to any conference attendee, although BlueVenn did personally invite companies it wanted to attend. Both sets of listeners were already aware of CDPs, which isn’t something I’d expect to have seen a year or two ago. Both also had a reasonable notion of what a CDP does. But they still seemed to need help distinguishing CDPs from other types of systems, so we still have plenty more work to do in educating the market.

Use of CDPs beyond marketing. People in both cities described CDPs being bought and used throughout client organizations, sometimes after marketing was the original purchaser and sometimes as a corporate project from the start. That was always a potential but it’s delightful to hear about it actually happening. The more widely a CDP is used in a company, the more value the buyer gets – and the more benefit to the company’s customers. So hooray for that.

CDPs in vertical markets. The AgilOne audience were all retailers, not surprisingly given AgilOne’s focus and the relation of the event to Shop.org. But I heard in London about CDPs in financial services, publishing, telecommunications, and several other industries where CDP hasn’t previously been used much. More evidence of the broader awareness and the widespread need for the solution that CDP provides.

CDP for attribution. While in London I also stopped by the office of Fospha, another CDP vendor, which has just become a sponsor of the CDP Institute.
They are unusual in having a focus on multi-touch attribution, something we’ve seen in a couple of other CDPs but definitely less common than campaign management or personalization. That caught my attention because I just finished an analysis of artificial intelligence in journey orchestration, in which one major conclusion was that multi-touch attribution will be a key enabling technology. That needs a blog post of its own to explain, but the basic reason is that AI needs attribution (specifically, estimating the incremental value of each marketing action) as a goal to optimize against when it's comparing investments in different marketing tasks (content, media, segmentation, product, etc.).

If there's a common thread here, it's that CDPs are spreading beyond their initial buyers and applications. I’ll be presenting next week at yet another CDP-focused event, this one sponsored by BlueConic in advance of the Boston Martech Conference. Who knows what new things we'll see there?[...]



Vizury Combines Web Page Personalization with a Customer Data Platform

2017-09-16T11:13:32.295-04:00

One of the fascinating things about tracking Customer Data Platforms is the great variety among the vendors. It’s true that variety causes confusion for buyers. The CDP Institute is working to ease that pain, most recently with a blog discussion you’re welcome to join here. But for me personally, it’s been endlessly intriguing to trace the paths that vendors have followed to become CDPs and learn where they plan to go next.

Take Vizury, a Bangalore-based company that started eight years ago as a retargeting ad bidding platform. That grew into a successful business with more than 200 employees, 400 clients in 40 countries, and $30 million in funding. As it developed, the company expanded its product and, in 2015, released its current flagship, Vizury Engage, an omnichannel personalization system sold primarily to banks and insurance companies. Engage now has more than a dozen enterprise clients in Asia, expects to double that roster in the next six months, and is testing the waters in the U.S.

As often happens, Vizury’s configuration reflects its origins. In their case, the most obvious impact is on the scope of the system, which includes sophisticated Web page personalization – something very rare in the CDP world at large. In a typical implementation, Vizury builds the client’s Web site home page. That gives it complete control of how each visitor is handled. The system doesn't take over the rest of the client's Web site, although it can inject personalized messages on those pages through embedded tags.

In both situations, Vizury identifies known visitors by reading a hashed (i.e., disguised) customer ID it has placed on the visitor’s browser cookie. When a visitor enters the site, a Vizury tag sends the hashed ID to the Vizury server, which looks up the customer, retrieves a personalized message, and sends it back to the browser. The messages are built from templates which can include variables such as first name and calculated values such as a credit limit.
Customer-specific versions may be pregenerated to speed response; these are updated as new data is received about each customer. It takes ten to fifteen seconds for new information to make its way through the system and be reflected in output seen by the visitor.

Message templates are embedded in what Vizury calls an engagement, which is associated with a segment definition and can include versions of the same message for different channels. One intriguing strength of Vizury is machine-learning-based propensity models that determine each customer’s preferred channel. This lets Vizury send outbound messages through the customer’s preferred channel when there’s a choice. Outbound options include email, SMS, Facebook ads, and programmatic display ads. These can be sent on a fixed schedule or be triggered when the customer enters or leaves a segment. Bids for Facebook and display ads can be managed by Vizury’s own bidding engine, another vestige of its origins. Inbound options include on-site and browser push messages.

If a Web visitor is eligible for multiple messages, Vizury currently just picks one at random. The vendor is working on an automated optimization system that will pick the best message for each customer instead. There’s no way to embed a sequence of different messages within a given engagement, although segment definitions could push customers from one engagement to the next. Users do have the ability to specify how often a customer will be sent the same message, block messages the customer has already responded to, and limit how many total messages a customer receives during a time period.

What makes Vizury a CDP is that it builds and exposes a unified, persistent customer database. This collects data through Vizury's ow[...]
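The "preferred channel" selection is conceptually simple once the propensity scores exist: pick the channel with the highest score. A minimal sketch, assuming hypothetical score values and a made-up fallback threshold (Vizury's actual model and rules are not public):

```python
def preferred_channel(propensities, min_score=0.2):
    """Pick the outbound channel with the highest propensity score.
    Falls back to email below a threshold -- a hypothetical rule for
    illustration, not Vizury's documented behavior."""
    channel, score = max(propensities.items(), key=lambda kv: kv[1])
    return channel if score >= min_score else "email"

scores = {"email": 0.30, "sms": 0.55, "facebook": 0.10, "display": 0.05}
print(preferred_channel(scores))  # sms
```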



B2B Marketers Are Buying Customer Data Platforms. Here's Why.

2017-09-10T10:22:13.093-04:00

I’m currently drafting a paper on use of Customer Data Platforms by B2B SaaS marketers. The topic is more intriguing than it sounds because it raises the dual questions of why CDPs haven’t previously been used much by B2B SaaS companies and what's changed. To build some suspense, let’s first review who else has been buying CDPs.

We can skip over the first 3.8 billion years of life on earth, when the answer is no one. When true CDPs first emerged from the primordial ooze, their buyers were concentrated among B2C retailers. That’s not surprising, since retailers have always been among the most data-driven marketers. They’re the R in BRAT (Banks, Retailers, Airlines, Telcos), the mnemonic I’ve long used to describe the core data-driven industries*. What's more surprising is that the B's, A's, and T's weren't also early CDP users. I think the reason is that banks, airlines, and telcos all capture their customers’ names as part of their normal operations. This means they’ve always had customer data available and thus been able to build extensive customer databases without a CDP.

By contrast, offline retailers must work hard to get customer names and tie them to transactions, using indirect tools such as credit cards and loyalty programs. This means their customer data management has been less mature and more fragmented. (Online retailers do capture customer names and transactions operationally. And, while I don’t have firm data, my impression is that online-only retailers have been slower to buy CDPs than their multi-channel cousins. If so, they're the exception that proves the rule.)

Over the past year or two, as CDPs have moved beyond the early adopter stage, more BATs have in fact started to buy CDPs. As a further sign of industry maturity, we’re now starting to see CDPs that specialize in those industries. Emergence of such vertical systems is normal: it happens when demand grows in new segments because the basic concepts of a category are widely understood.
Specialization gives new entrants a way to sell successfully against established leaders. Sure enough, we're also seeing new CDPs with other types of specialties, such as products from regional markets (France, India, and Australia have each produced several) and for small and mid-size organizations (not happening much so far, but there are hints). And, of course, the CDP industry has always been characterized by an unusually broad range of product configurations, from systems that only build the central database to systems that provide a database, analytics, and message selection; that's another type of specialization. I recently proposed a way to classify CDPs by function on the CDP Institute blog.**

B2B is another vertical. B2B marketers have definitely been slow to pick up on CDPs, which may seem surprising given their frenzied adoption of other martech. I’d again explain this in part by the state of the existing customer data: the more advanced B2B marketers (who are the most likely CDP buyers) nearly all have a marketing automation system in place. The marketers' initial assumption would be that marketing automation can assemble a unified customer database, making them uninterested in exploring a separate CDP. Eventually they'd discover that nearly all B2B marketing automation systems are very limited in their data management capabilities. That’s happening now in many cases – and, sure enough, we’re now seeing more interest among B2B marketers in CDPs.

But there's another reason B2B marketers have been uncharacteristically slow adopters when it comes to CDPs. B2B marketers have traditionally focused on acquiring new leads, leaving the rest of the customer[...]



AgilOne Adds New Flexibility to An Already-Powerful Customer Data Platform

2017-08-31T13:04:02.366-04:00


It’s more than four years since my original review of AgilOne, a pioneering Customer Data Platform. As you might imagine, the system has evolved quite a bit since then. In fact, the core data management portions have been entirely rebuilt, replacing the original fixed data model with a fully configurable model that lets the system easily adapt to each customer.

The new version uses a bouquet of colorfully-named big data technologies (Kafka, Parquet, Impala, Spark, Elastic Search, etc.) to support streaming inputs, machine learning, real time queries, ad hoc analytics, SQL access, and other things that don’t come naturally to Hadoop. It also runs on distributed processors that allow fast scaling to meet peak demands. That’s especially important to AgilOne since most of its clients are retailers whose business can spike sharply on days like Black Friday.

In other ways, though, AgilOne is still similar to the system I reviewed in 2013. It still provides sophisticated data quality, postal processing, and name/address matching, which are often missing in CDPs designed primarily for online data. It still has more than 300 predefined attributes for specialized analytics and processing, although the system can function without them. It still includes predictive models and provides a powerful query builder to create audience segments. Campaigns are still designed to deliver one message, such as an email, although users could define campaigns with related audiences to deliver a sequence of messages. There’s still a “Customer360” screen to display detailed information about individual customers, including full interaction history.

But there’s plenty new as well. There are more connectors to data sources, a new interface to let users add custom fields and calculations for themselves, and workflow diagrams to manage data processing flows. Personalization has been enhanced and the system exposes message-related data elements including product recommendations and the last products browsed, purchased, and abandoned. AgilOne now supports Web, mobile, and social channels and offers more options for email delivery. A/B tests have been added while analytics and reporting have been enhanced.

What should be clear is that AgilOne has an exceptionally broad (and deep) set of features. This puts it at one end of the spectrum of Customer Data Platforms. At the other end are CDPs that build a unified, sharable customer database and do nothing else. In between are CDPs that offer some subset of what AgilOne offers: advanced identity management, offline data support, predictive analytics, segmentation, multi-channel campaigns, real time interactions, advanced analytics, and high scalability. This variety is good for buyers, since it means there’s a better chance they can find a system that matches their needs. But it’s also confusing, especially for buyers who are just learning about CDPs and don’t realize how much they can differ. That confusion is something we’re worrying about a lot at the CDP Institute right now. If you have ideas for how to deal with it, let me know.



Self-Driving Marketing Campaigns: Possible But Not Easy

2017-08-25T20:18:54.133-04:00

A recent Forrester study found that most marketers expect artificial intelligence to take over the more routine parts of their jobs, allowing them to focus on creative and strategic work. That’s been my attitude as well. More precisely, I see AI enabling marketers to provide the highly tailored experiences that customers now demand. Without AI, it would be impossible to make the number of decisions necessary to do this. In short, complexity is the problem, AI is the solution, and we all get Friday afternoons off. Happy ending.

But maybe it's not so simple. Here’s the thing: we all know that AI works because it can learn from data. That lets it make the best choice in each situation, taking into account many more factors than humans can build into conventional decision rules. We also all know that machines can automatically adjust their choices as they learn from new data, allowing them to continuously adapt to new situations.

Anyone who's dug a bit deeper knows two more things:

- Self-adjustment only works in circumstances similar to the initial training conditions. AI systems don’t know what to do when they’re faced with something totally unexpected. Smart developers build their systems to recognize such situations, alert human supervisors, and fail gracefully by taking an action that is likely to be safe. (This isn’t as easy as it sounds: a self-driving car shouldn’t stop in the middle of an intersection when it gets confused.)

- AI systems of today and the near future are specialists. Each is trained to do a specific task like play chess, look for cancer in an X-ray, or bid on display ads. This means that something like a marketing campaign, which involves many specialized tasks, will require cooperation of many AIs. That’s not new: most marketing work today is done by human specialists, who also need to cooperate.
But while cooperation comes naturally to (most) humans, it needs to be purposely added as a skill to an AI.*

By itself, this more nuanced picture isn’t especially problematic. Yes, marketers will need multiple AIs and those AIs will need to cooperate. Maintaining that cooperation will be work but presumably can itself eventually be managed by yet another specialized AI. But let’s put that picture in a larger context.

The dominant feature of today’s business environment is accelerating change. AI itself is part of that change but there are other forces at play: notably, the “personal network effect” that drives companies like Facebook, Google, and Amazon to hoard increasing amounts of data about individual consumers. These forces will impose radical change on marketers’ relations with customers. And radical change is exactly what the marketers’ AI systems will be unable to handle.

So now we have a problem. It’s easy – and fun – to envision a complex collection of AI-driven components collaborating to create fully automated, perfectly personalized customer experiences. But that system will be prone to frequent failures as one or another component finds itself facing conditions it wasn’t trained to handle. If the systems are well designed (and we’re lucky), the components will shut themselves down when that happens. If we’re not so lucky, they’ll keep running and return increasingly inappropriate results. Yikes.

Where do we go from here? One conclusion would be that there’s a practical limit to how much of the marketing process can really be taken over by AI. Some people might find that comforting, at least for job security. Others would be sad. A more positive conclusion is that it’s still possible to build a completely AI-driven marketing process but it’s going to be harder than we thought. We’ll[...]



Treasure Data Offers An Easy-to-Deploy Customer Data Platform

2017-08-20T17:42:56.637-04:00

One of my favorite objections from potential buyers of Customer Data Platforms is that CDPs are simply “too good to be true”. It’s a reasonable response from people who hear CDP vendors say they can quickly build a unified customer database but have seen many similar-seeming projects fail in the past. I like the objection because I can so easily refute it by pointing to real-world case histories where CDPs have actually delivered on their promise.

One of the vendors I have in mind when I’m referring to those histories is Treasure Data. They’ve posted several case studies on the CDP Institute Library, including one where data was available within one month and another where it was ready in two hours. Your mileage may vary, of course, but these cases illustrate the core CDP advantage of using preassembled components to ingest, organize, access, and analyze data. Without that preassembly, accessing just one source can take days, weeks, or even months to complete.

Even in the context of other CDP systems, Treasure Data stands out for its ability to connect with massive data sources quickly. The key is a proprietary data format that lets it access new data sources with little explicit mapping: in slightly more technical terms, Treasure Data uses a columnar data structure where new attributes automatically appear as new columns. It also helps that the system runs on Amazon S3, so little time is spent setting up new clients or adding resources as existing clients grow. Treasure Data ingests data using the open source connectors Fluentd for streaming inputs and Embulk for batch transfers. It provides deterministic and probabilistic identity matching, integrated machine learning, always-on encryption, and precise control over which users can access which pieces of data. One caveat is that there’s no user interface to manage this sort of processing: users basically write scripts and query statements.
Treasure Data is working on a user interface to make this easier and to support complex workflows. Data loaded into Treasure Data can be accessed through an integrated reporting tool and an interface that shows the set of events associated with a customer. But most users will rely on prebuilt connectors for Python, R, Tableau, and Power BI. Other SQL access is available using Hive, Presto, and ODBC. While there’s no user interface for creating audiences, Treasure Data does provide the functions needed to assign customers to segments and then push those segments to email, Facebook, or Google. It also has an API that lets external systems retrieve the list of all segments associated with a single customer.

Treasure Data clearly isn’t an all-in-one solution for customer data management. But organizations with the necessary technical skills and systems can find it hugely increases the productivity of their resources. The company was founded in 2011 and now has over 250 clients, about half from the data-intensive worlds of games, ecommerce, and ad tech. Pricing starts around $100,000 per year. The actual pricing models vary with the situation but are usually based on either the number of customer profiles being managed or total resource consumption.[...]
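The "new attributes automatically appear as new columns" behavior described above is essentially schema-on-read. Here is a toy Python illustration of that idea, nothing like Treasure Data's actual proprietary format: the column set is simply the union of keys across incoming records, so a record carrying a never-before-seen attribute creates a new column with no explicit mapping step.

```python
def to_columns(records):
    """Schema-on-read sketch: the union of keys across all records
    becomes the column set; records missing an attribute get None.
    (Toy illustration only, not Treasure Data's format.)"""
    columns = sorted({key for record in records for key in record})
    return {c: [record.get(c) for record in records] for c in columns}

events = [
    {"user": "u1", "event": "view"},
    {"user": "u2", "event": "buy", "amount": 42},  # new attribute appears
]
print(to_columns(events))
# {'amount': [None, 42], 'event': ['view', 'buy'], 'user': ['u1', 'u2']}
```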



Blueshift CDP Adds Advanced Features

2017-07-17T17:49:03.880-04:00

I reviewed Blueshift in June 2015, when the product had been in-market for just a few months and had a handful of large clients. Since then they’ve added many new features and grown to about 50 customers. So let’s do a quick update.

Basically, the system is still what it was: a Customer Data Platform that includes predictive modeling, content creation, and multi-step campaigns. Customer data can be acquired through the vendor’s own Javascript tags, mobile SDK (new since 2015), API connectors, or file imports. Blueshift also has collection connectors for Segment, Ensighten, mParticle, and Tealium. Product data can load through file imports, a standard API, or a direct connector to DemandWare.

As before, Blueshift can ingest, store, and index pretty much any data with no advance modeling, using JSON, MongoDB, Postgres, and Kafka. Users do have to tell source systems what information to send and map inputs to standard entities such as customer name, product ID, or interaction type. There is some new advanced automation, such as tying related events to a transaction ID. The system’s ability to load and expose imported data in near-real-time remains impressive.

Blueshift will stitch together customer identities using multiple identifiers and can convert anonymous to known profiles without losing any history. Profiles are automatically enhanced with product affinities and scores for purchase intent, engagement, and retention. The system had automated predictive modeling when I first reviewed it, but has now added machine-learning-based product recommendations. In fact, its recommendations are exceptionally sophisticated. Features include a wide range of rule- and model-based recommendation methods, an option for users to create custom recommendation types, and multi-product recommendation blocks that mix recommendations based on different rules. For example, the system can first pick a primary recommendation and then recommend products related to it.
To check that the system is working as expected, users can preview recommendations for specified segments or individuals.

The segment builder in Blueshift doesn’t seem to have changed much since my last review: users select data categories, elements, and values used to include or exclude segment members. The system still shows the counts for how many segment members are addressable via email, display ads, push, and SMS. On the other hand, the campaign builder has expanded significantly. The previous form-based campaign builder has been replaced by a visual interface that allows branching sequences of events and different treatments within each event. These treatments include thumbnails of campaign creative and can be in different channels. That's special because many vendors still limit campaigns to a single channel. Campaigns can be triggered by events, run on fixed schedules, or executed once. Each treatment within an event has its own selection conditions, which can incorporate any data type: previous behaviors, model scores, preferred communications channels, and so on. Customers are tested against the treatment conditions in sequence and assigned to the first treatment they match.

Content builders let users create templates for email, display ads, push messages, and SMS messages. This is another relatively rare feature. Templates can include personalized offers based on predictive models or recommendations. The system can run split tests of content or recommendation methods. Attribution reports can now include custom goals, which lets users measure different campaigns against different objectives. Blueshift still relies on external se[...]
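The first-match treatment rule is a pattern worth knowing because ordering the conditions effectively sets treatment priority. A minimal Python sketch, with invented treatment names and conditions (Blueshift's actual condition language is its own):

```python
def pick_treatment(customer, treatments):
    """Test the customer against treatment conditions in sequence and
    return the first match -- the first-match rule, illustrated with
    hypothetical treatments."""
    for name, condition in treatments:
        if condition(customer):
            return name
    return None  # no treatment matched

treatments = [
    ("vip_push", lambda c: c["lifetime_value"] > 1000),   # highest priority
    ("email_discount", lambda c: c["channel"] == "email"),
    ("generic_sms", lambda c: True),                      # catch-all
]
print(pick_treatment({"lifetime_value": 50, "channel": "email"}, treatments))
# email_discount
```

Because evaluation stops at the first hit, putting the catch-all last guarantees every customer gets exactly one treatment.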



Lexer Customer Data Platform Grows from Social Listening Roots

2017-07-10T07:43:01.017-04:00

Customer Data Platform vendors come from many places, geographically and functionally. Lexer is unusual in both ways, having started in Australia as a social media listening platform. About two years ago the company refocused on building customer profiles with data from all sources. It quickly added clients among many of Australia’s largest consumer-facing brands including Qantas airlines and Westpac bank.

Social media is still a major focus for Lexer. The system gathers data from Facebook and Instagram public pages and from the Twitter follower lists of clients’ brands. It analyzes posts and follows to understand consumer interests, assigning people to “tribes” such as “beach lifestyle” and personas such as “sports and fitness”. It supplements the social inputs with information from third party data sources, location history, and a client’s own email, Web site, customer service, mobile apps, surveys, point of sale, and other systems. Matching is strictly deterministic, although links based on different matches can be chained together to unify identities across channels. The system can also use third party data to add connections it can’t make directly.
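Chaining deterministic matches is a classic transitive-closure problem: if an email matches a cookie and that cookie matches a social handle, all three belong to one profile. A union-find sketch of the idea in Python (my illustration, not Lexer's implementation; identifiers are invented):

```python
class IdentityGraph:
    """Union-find sketch of deterministic identity stitching: identifiers
    linked directly or through a chain of matches end up in one profile."""
    def __init__(self):
        self.parent = {}

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def link(self, a, b):
        """Record a deterministic match between two identifiers."""
        self.parent[self._find(a)] = self._find(b)

    def same_profile(self, a, b):
        return self._find(a) == self._find(b)

g = IdentityGraph()
g.link("email:ann@example.com", "cookie:abc123")  # e.g. a web login match
g.link("cookie:abc123", "twitter:@ann")           # e.g. a social match
print(g.same_profile("email:ann@example.com", "twitter:@ann"))  # True
```

The email and the Twitter handle were never matched directly; the shared cookie chains them into the same identity.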

Lexer ingests data in near-real-time, making social media posts available to users within about five minutes. It can react to new data by moving customers into different tribes or personas and can send lists of those customers to external systems for targeting in social, email, or other channels.  There are standard integrations with Facebook, Twitter, and Google Adwords advertising campaigns. External systems can also use an API to read the Lexer data, which is stored in Amazon Elastic Search.

Unusually for a CDP, Lexer also provides a social engagement system that lets service agents engage directly with customers. This system displays the customer’s profile, including a detailed interaction history and group memberships. Segment visualization is unusually colorful and attractive.

Lexer has about forty clients, nearly all in Australia. It is just entering the U.S. market and hasn’t set U.S. prices.



The Personal Network Effect Makes Walled Gardens Stronger, But There's Still Hope

2017-12-01T14:43:31.519-05:00

I’m still chewing over the role of “walled garden” vendors including Google, Amazon, and Facebook, and in particular how most observers – especially in the general media – fail to grasp how those firms differ from traditional monopolists. As it happens, I’m also preparing a speech for later this month that will touch on the topic, which means I’ve spent many hours working on slides to communicate the relevant concepts. Since just a handful of people will see the slides in person, I figured I’d share them here as well.

In pondering the relation of the walled garden vendors to the rest of us, I’ve come to realize there are two primary dynamics at work. The first is the “personal network effect” that I’ve described previously. The fundamental notion is that companies get exponentially increasing value as they capture more types of information about a consumer. For example, it’s useful to know what’s on someone’s calendar and it’s useful to have a mapping app that captures their locations. But if the same company controls both those apps, it can connect them to provide a new service such as automatically mapping out the day’s travel route. Maybe you even add helpful suggestions for where to stop for fuel or lunch.

In network terms, you can think of each application as a node with a value of its own and each connection between nodes as having a separate additional value. Since the number of connections increases faster than the number of nodes, there’s a sharp rise in value each time a new node is added. The more nodes you own already, the greater the increase: so companies that own several nodes can afford to pay more for a new node than companies that own just one node. This makes it tough for new companies to break into a customer’s life.
It also makes it tough for customers to break away from their dominant network provider.My best visualization of this is to show the applications surrounding an individual and to draw lines showing how many more connections appear when you add nodes.  If it looks like the customer is trapped by those lines, well, yes.The point that’s missing from the discussions I’ve seen about walled gardens is that personal networks create a monopoly on the individual level. Different companies can coexist as the dominant networks for different people.  So let’s assume that Google, Facebook, Amazon, and Apple each manage to capture one quarter of the population in their own network. If each member spends 100% of her money through her network owner, the over-all market share of each firm would be just 25%. From a classical viewpoint, that’s a highly competitive market. But each consumer is actually at the mercy of a monopolist.  (If you want a real-life example, consider airline hub-and-spoke route maps.  Each airline has an effective monopoly in its hub cities, even though no airline has an over-all monopoly.  It took regulators a long time to figure that one out, too.)   In theory the consumer could switch to a new network. But switching costs are very high, since you have to train the new network to know as much about you as the old network. And switching to a new network just means you’re changing monopolists.  Remember that the personal network effect makes it really inconvenient to have more than one primary network provider.The second dynamic is the competition among network providers to attract new customers. As with any network, personal networks hold a big first mover advantage: whichever provider first sells several apps to the sam[...]
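The quadratic growth of connections can be seen in a toy calculation. This is my own illustration of the concept, not a formula from any vendor; the assumption that each node and each connection is worth one unit is mine, purely for demonstration:

```python
# Toy model of the "personal network effect": a provider owning n apps
# (nodes) about one customer captures the nodes plus every pairwise
# connection between them. Assumed values (1 unit each) are illustrative.

def network_value(n_apps: int) -> int:
    """Value of owning n apps about one customer: nodes + connections."""
    nodes = n_apps
    connections = n_apps * (n_apps - 1) // 2  # pairs grow quadratically
    return nodes + connections

def marginal_value(n_owned: int) -> int:
    """Extra value from adding one more app, given n already owned."""
    return network_value(n_owned + 1) - network_value(n_owned)

# A provider with five apps gains more from app #6 than a one-app
# provider gains from app #2 -- so incumbents can outbid entrants.
print(marginal_value(1))  # 2
print(marginal_value(5))  # 6
```

The marginal value rises with every node already owned, which is the whole point: the incumbent can always pay more for the next app than a newcomer can.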



Amazon Buys Whole Foods: It's Not About Groceries

2017-06-21T15:16:43.331-04:00

Most of the comments I’ve seen about Amazon’s acquisition of Whole Foods have described it as Amazon (a) expanding into a new industry, (b) continuing to disrupt conventional retail, and (c) moving more commerce from offline to online channels. Those are all true, I suppose, but I felt they missed the real story: this is another step in Amazon building a self-contained universe that its customers never have to leave.

That sounds a bit more paranoid than it should. This has nothing to do with Amazon being evil. It’s just that I see the over-arching story of the current economy as the creation of closed universes by Amazon, Facebook, Google, Apple, and maybe a couple of others. The owners of those universes control the information their occupants receive and, through that, control what they buy, who they meet, and ultimately what they think. The main players all realize this and are quite consciously competing with each other to expand the scope of their services so consumers have less reason to look outside of their borders. So Amazon buys a grocery chain to give its customers one less reason to visit a retail store (because Amazon’s long-term goal is surely for customers to order online for same-day delivery). And, hedging its bets a bit, Amazon also wants to control the physical environment if customers do make a visit.

I’ve written about this trend many times before, but still haven’t seen much on the topic from other observers. This puzzles me a bit because it’s such an obviously powerful force with such profound implications. Indeed, a great deal of what we worry about in the near future will become irrelevant if things unfold as I expect.

Let me step back and give a summary of my analysis. The starting point is that people increasingly interact with the world through their online identities in general and their mobile phones in particular. The second point is that a handful of companies control an increasing portion of consumers’ experiences through those devices: Facebook taking most of their screen time, Google or Apple owning the physical device and primary user interface, and Amazon managing most of their purchases.

At present, Facebook, Apple, Google, and Amazon still occupy largely separate spheres, so most people live in more than one universe. But each of the major players is entering the turf of the others. Facebook and Google compete to provide information via social and search. Both offer buying services that compete with Amazon. Amazon and Apple are using voice appliances to intercept queries that would otherwise go to the others. Each vendor’s goal is to expand the range of services it provides.

This sets up a virtuous cycle where consumers find it’s increasingly convenient to do everything through one vendor. Instead of a conventional “social network effect,” where the value of a network grows with the number of users, this is a “personal network effect,” where the value of a vendor relationship grows with the number of services the vendor provides to the same individual. While a social network effect pulls everyone onto a single universal network, the personal network effect allows different individuals to congregate in separate networks. That means the different network universes can thrive side by side, competing at the margins for new members while making it very difficult for members to switch from one network to the other. There’s still some value to network scale, however. Bigger networks will be able to create more appealing services and attract more par[...]



Cheetah Digital Debuts in Las Vegas

2017-06-12T21:04:57.621-04:00

I spent the latter part of last week still in Las Vegas, switching to the client conference for Cheetah Digital, the newly-renamed spinoff of Experian’s Cross Channel Marketing division. Mercifully, this was at a relatively humane venue, the big advantage being I could get from my hotel room to the conference sessions without walking through the casino floor or a massive shopping mall. But it was still definitely Vegas.

The conference offered a mix of continuity and change. Nearly every client and employee I met had been with Cheetah / Experian for at least several years, so there was a definite feeling of old friends reconnecting. Less pleasantly, Cheetah’s systems have also been largely unchanged for years, something that company leaders could admit openly since they are now free to make new investments. Change was provided by the company’s new name and ownership: the main investor is now Vector Capital, whose other prominent martech investments include Sizmek, Emarsys, and Meltwater. There’s also some participation from ExactTarget co-founder Peter McCormick and Experian itself, which retained 25% ownership. The Cheetah Digital name reflects the company’s origins as CheetahMail, which Experian bought in 2004 and later renamed, although many people never stopped calling it Cheetah.

Looking ahead, newly-named Cheetah CEO Sameer Kazi, another ExactTarget veteran, said the company’s immediate priorities are to consolidate and modernize its technology. In particular, they want to move all clients from the original CheetahMail platform to Marketing Suite, which was launched in 2014. Marketing Suite is based on Conversen, a cross-channel messaging system that Experian acquired in 2012. Kazi said about one third of the company’s revenue already comes from Marketing Suite and that the migration from the old platform will take four or five years to complete.

Longer term, Kazi said Cheetah’s goal is to become the world’s leading independent marketing technology company, distinguishing Cheetah from systems that are part of larger enterprise platforms. Part of the technical strategy to do this is to separate business logic from applications, using APIs to connect the two layers. This will make it easier for marketers to integrate external systems, taking advantage of industry innovation without requiring Cheetah to extend its own products. Cheetah will also continue to provide services and build customer databases for its clients. Products based on third party data, such as credit information and identity management, have remained with the old Experian organization.

With $300 million in revenue and 1,600 employees, Cheetah Digital is already one of the largest martech companies. It is also one of the few that can handle enterprise-scale email. This makes it uniquely appealing to companies that are uncomfortable with the big marketing cloud vendors. The company still faces a major challenge in upgrading its technology to optimize customer treatments in real time across inbound as well as outbound channels. It's a roll of the dice.[...]



Pega Does Vegas

2017-06-07T13:36:33.380-04:00

I spent the first part of this week at Pegasystems’ PegaWorld conference in Las Vegas, a place which totally creeps me out.* Ironically or appropriately, Las Vegas’ skill at profit-optimized people-herding is exactly what Pega offers its own clients, if in a more genteel fashion. Pega sells software that improves the efficiency of company operations such as claims processing and customer service. It places a strong emphasis on meeting customer needs, both through predictive analytics to anticipate what each person wants and through interfaces that make service agents’ jobs easier. The conference highlighted Pega and Pega clients’ achievements in both areas. Although Pega also offers some conventional marketing systems, they were not a major focus. In fact, while conference materials included a press release announcing a new Paid Media solution, I don’t recall it being mentioned on the main stage.**

What we did hear about was artificial intelligence. Pega founder and CEO Alan Trefler opened with a blast of criticism of other companies’ over-hyping of AI but wasn’t shy about promoting his own company’s “real” AI achievements. These include varying types of machine learning, recommendations, natural language processing, and, of course, chatbots. The key point was that Pega integrates its bots with all of a company’s systems, hiding much of the complexity in assembling and using information from both customers and workers. In Pega’s view, this distinguishes their approach from firms that deploy scores of disconnected bots to do individual tasks.

Pega Vice President for Decision Management and Analytics Rob Walker gave a separate keynote that addressed fears of AI hurting humans. He didn’t fully reject the possibility, but made clear that Pega’s official position is that it’s adequate to let users understand what an AI is doing and then choose whether to accept its recommendations. Trefler reinforced the point in a subsequent press briefing, arguing that Pega has no reason to limit how clients can use AI or to warn them when something could be illegal, unethical, dangerous, or just plain stupid.

Apart from AI, there was an interesting stream of discussion at the conference about “robotic process automation”. This doesn’t come up much in the world of marketing technology, which is where I mostly live outside of Vegas. But apparently it’s a huge thing in customer service, where agents often have to toggle among many systems to get tasks done. RPA, as it’s known to its friends, is basically a stored series of keystrokes, which in simpler times was called a macro. But it’s managed centrally and runs across systems. We heard amazing tales of the effort saved by RPA, which doesn’t require changes to existing systems and is therefore very easy to deploy. But, as one roundtable participant pointed out, companies still need change management to ensure workers take advantage of it.

Beyond the keynotes, the conference featured several customer stories. Coca Cola and General Motors both presented visions of a connected future where soda machines and automobiles try to sell you things. Interesting, but we’ve heard those stories before, if not necessarily from those firms. But Scotiabank gave an unusually detailed look at its in-process digital transformation project and Transavia airlines showed how it has connected customer, flight, and employee information to give everyone in the company a complete view of pretty much everything. This allow[...]
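The "centrally managed macro" idea can be sketched in a few lines. This is my own toy illustration of the pattern, not Pega's product or API; the bot name, step list, and handlers are all invented for demonstration:

```python
# Toy RPA sketch: a "bot" is a stored sequence of steps, each targeting
# a different back-end system, kept in one central place rather than on
# each agent's desktop. Handlers stand in for real application UIs.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    system: str   # which application the step runs against
    action: str   # what the agent would have typed or clicked

# Hypothetical centrally-stored macro for a refund workflow.
REFUND_BOT = [
    Step("billing", "look up invoice"),
    Step("crm", "open customer record"),
    Step("billing", "issue refund"),
    Step("email", "send confirmation"),
]

def run_bot(steps: list[Step], handlers: dict[str, Callable[[str], str]]) -> list[str]:
    """Replay each step through the handler for its target system."""
    return [handlers[s.system](s.action) for s in steps]

# Stub handlers; real RPA would drive the applications' own interfaces.
handlers = {name: (lambda action, name=name: f"{name}: {action}")
            for name in ("billing", "crm", "email")}

log = run_bot(REFUND_BOT, handlers)
print(log[0])  # billing: look up invoice
```

The key property the text describes is visible here: the existing systems are untouched; the bot just replays the agent's cross-system actions from a central definition.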



SessionM Expands from Loyalty to Full Customer Engagement Management

2017-11-01T11:29:16.229-04:00

SessionM launched in 2012 as a platform that increased user engagement by adding gamification and loyalty rewards to mobile apps. The system has since expanded to support more channels and message types. This puts it in competition with dozens of other customer engagement and personalization systems. Compared with those vendors, SessionM’s loyalty features are probably its most distinctive capability. But it would be misleading to pigeonhole SessionM as a system for loyalty marketers. Instead, consider it a personalized messaging* product that offers loyalty as a bonus option for marketers who need it.

In that spirit, let’s break down SessionM’s capabilities by the usual categories of data, message selection, and delivery.

Data: SessionM can gather customer behaviors on Web and mobile apps from its own tags or using feeds from standard Web analytics tools. It can also ingest data from other sources such as a Customer Data Platform or CRM system. Customer data is organized into profiles and events, which lets the system store nearly any type of information without a complex data model. SessionM can also accommodate non-customer data such as lists of products and retail stores. It can apply multiple keys to link data related to the same customer, but requires exact matches. This works well when dealing with known customers, who usually identify themselves when they start using a system. Finding connections among records belonging to anonymous visitors would require additional types of matching.

Message Selection: SessionM is organized around campaigns. Each campaign has a target audience, goal (defined by a query), outcome (such as adding points to an account or tagging a customer profile), message, and “execution” (the channel-specific experience that includes the message). SessionM describes the outcome as primary and the message as following it: think of a notification after you've earned an award. Non-loyalty marketers might think of the message as coming first with the outcome as secondary. In practice, the order doesn’t matter. What does matter is that campaigns can include multiple messages, each having its own selection rules. Message delivery can be scheduled or triggered by variables such as time, frequency, and customer behaviors. This means a SessionM campaign could deliver a sequence of messages over time, even though the system doesn’t have a multi-step campaign builder. Rules can draw on machine learning models that predict content affinity, churn, lifetime value, near-time purchase, and engagement. Clients can use the standard models or tweak them to fit special needs. Automated product recommendations are due later this year. Messages are built from templates that can include dynamic elements selected by rules or models.

Delivery: Campaign messages are delivered through widgets installed in a Web page or mobile app, through lists sent to email providers or advertising Data Management Platforms (DMPs), or through API calls from other systems such as chatbots. Multiple campaigns can connect through the same widget, which raises the possibility of conflicts. At present, users have to control this manually through campaign and message rules. SessionM is working on a governance module to manage campaign precedence and limit the total number of messages. The system can generate presentation-ready messages or send data elements for the delivery system to transform into the published for[...]
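Exact-match identity linking of the kind described under Data is easy to sketch. This is my own illustration of the general technique, not SessionM's code; the key names and sample records are invented. Note what exact matching does and doesn't catch:

```python
# Records sharing any exact key value (email, phone, loyalty ID) are
# merged into one profile via union-find; near-misses such as different
# letter case are deliberately NOT linked -- that would need fuzzy matching.

def link_profiles(records: list[dict]) -> list[list[dict]]:
    """Group records whose key fields match exactly."""
    parent = list(range(len(records)))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i: int, j: int) -> None:
        parent[find(i)] = find(j)

    seen: dict[tuple, int] = {}  # (key_name, value) -> first record index
    for i, rec in enumerate(records):
        for key in ("email", "phone", "loyalty_id"):  # hypothetical keys
            value = rec.get(key)
            if value is None:
                continue
            if (key, value) in seen:
                union(i, seen[(key, value)])
            else:
                seen[(key, value)] = i

    groups: dict[int, list[dict]] = {}
    for i, rec in enumerate(records):
        groups.setdefault(find(i), []).append(rec)
    return list(groups.values())

records = [
    {"email": "ann@example.com", "phone": "555-0100"},
    {"phone": "555-0100", "loyalty_id": "L1"},  # links to Ann via phone
    {"email": "Ann@example.com"},               # case differs: NOT linked
]
print(len(link_profiles(records)))  # 2
```

The third record illustrates the limitation the text notes for anonymous or messy data: anything short of an exact match stays a separate profile.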



Coherent Path Auto-Optimizes Promotions for Long Term Value

2017-05-24T19:34:42.806-04:00

One of the grand challenges facing marketing technology today is having a computer find the best messages to send each customer over time, instead of making marketers schedule the messages in advance. One roadblock has been that automated design requires predicting the long-term impact of each message: just selecting the message with the highest immediate value can reduce future income. This clearly requires optimizing against a metric like lifetime value. But that's really hard to predict.

Coherent Path offers what may be a solution. Using advanced math that I won’t pretend to understand*, they identify offers that lead customers towards higher long-term values. In concrete terms, this often means cross-selling into product categories the customer hasn’t yet purchased. While this isn’t a new tactic, Coherent Path improves it by identifying intermediary products (on the "path" to the target) that the customer is most likely to buy now. It can also optimize other variables such as the time between messages, price discounts, and the balance between long- and short-term results.

Coherent Path clients usually start by optimizing their email programs, which offer a good mix of high volume and easy measurability. The approach is to define a promotion calendar, pick product themes for each promotion, and then select the best offers within each theme for each customer. “Themes” are important because they’re what Coherent Path calculates different customers might be interested in. The system relies on marketers to tell it what themes are associated with each product and message (that is, the system has no semantic analytics to do that automatically). But because Coherent Path can predict which customers might buy in which themes, it can suggest themes to include in future promotions.

Lest this seem like the blackest of magic, rest assured that Coherent Path bases its decisions on data. It starts with about two years of interactions for most clients, so it can see a good sample of customers who have already completed a journey to high value status. Clients need at least several hundred products and preferably thousands. These products need to be grouped into categories so the system can find common patterns among the customer paths. Coherent Path automatically runs tests within promotions to further refine its ability to predict customer behaviors. Most clients also set aside a control group to compare Coherent Path results against customers managed outside the system. Coherent Path reports results such as a 22% increase in email revenue and 10:1 return on investment – although of course your mileage may vary.

The system can manage channels other than email. Coherent Path says most of its clients move on to display ads, which are also relatively easy to target and measure. Web site offers usually come next.

Coherent Path was founded in 2012 and has been offering its current product for more than two years. Clients are mostly mid-size and large retailers, including Neiman Marcus, L.L. Bean, and Staples. Pricing starts around $10,000 per month.
_________________________________________________________________________
* Download their marketing explanation here or read an academic discussion here. [...]



Dynamic Yield Offers Flexible Omni-Channel Personalization

2017-05-20T16:15:44.531-04:00

There are dozens of Web personalization tools available. All do roughly the same thing: look at data about a visitor, pick messages based on that data, and deploy those messages. So how do you tell them apart?

The differences fall along several dimensions. These include what data is available, how messages are chosen, which channels are supported, and how the system is implemented. Let’s look at how Dynamic Yield stacks up.

Data: Dynamic Yield can install its own Javascript tag to identify visitors and gather their information, or it can accept an API call with a visitor ID. It can also build profiles by ingesting data from email, CRM, mobile apps, or third party sources. It will stitch data together when the same personal identifier is used in different source systems, but it doesn’t do fuzzy or probabilistic cross-device matching. Data is ingested in real time, allowing the system to react to customer behaviors as they happen.

Message selection: this is probably where personalization systems vary the most. Dynamic Yield largely relies on users to define selection rules. Specifically, users create “experiences” that usually relate to a single position on a Web page or a single message in another channel. Each experience has a list of associated promotions and each promotion has its own target audience, content, and related settings. When a visitor engages with an experience, the system finds the first promotion audience the visitor matches and delivers the related content. This is a pretty basic approach and doesn’t necessarily deliver the best message to visitors who qualify for several audiences. But dynamic content rules, machine learning, and automated recommendations can improve results by tailoring the final message to each individual. In addition, the system can test different messages within each promotion and optimize the results against a user-specified goal. This lets it send different messages to different segments within the audience.

Product recommendations are especially powerful. Dynamic Yield supports multiple recommendation rules, including similarity, bought together, most popular, user affinity, and recently viewed. One experience can return multiple products, with different products selected by different rules. In other words, the system can present a combination of recommendations including some that are similar to the current product, some that are often purchased with it, and some that are most popular over all.

Channels: this is a particular strength for Dynamic Yield, which can personalize Web pages, emails, landing pages, mobile apps, mobile push, display ads, and offline channels. Most personalization options are available in most channels, although there are some exceptions: you can’t do multi-product recommendations within a display ad and system-hosted landing pages can’t include dynamic content.

Implementation: this also varies by channel. Web site personalization is especially flexible: the Javascript tag can read an existing Web page and either replace it entirely or create a version with a Dynamic Yield object inserted, without changing the page code itself. Users who do control the page code can insert a call to the Dynamic Yield API. Email personalization can also be done by inserting an API call, which lets Dynamic Yield reselect the message each time the email is rendered. The system has direct in[...]
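The first-match selection described above is a common pattern worth seeing concretely. This is my own sketch of the general logic, not Dynamic Yield's API or code; the promotion names and audience predicates are invented:

```python
# First-match promotion selection: promotions are checked in priority
# order and the first audience the visitor matches wins, even if a
# later audience would fit better -- the limitation noted in the text.

from typing import Callable, Optional

Visitor = dict  # e.g. {"country": "US", "visits": 12}

class Promotion:
    def __init__(self, name: str,
                 audience: Callable[[Visitor], bool], content: str):
        self.name = name
        self.audience = audience
        self.content = content

def select_content(promotions: list[Promotion],
                   visitor: Visitor) -> Optional[str]:
    """Return content of the first promotion whose audience matches."""
    for promo in promotions:
        if promo.audience(visitor):
            return promo.content
    return None  # fall through to the default page content

# Hypothetical experience with two promotions in priority order.
experience = [
    Promotion("vip", lambda v: v.get("visits", 0) >= 10, "VIP banner"),
    Promotion("us", lambda v: v.get("country") == "US", "US promo"),
]

# A frequent US visitor qualifies for both audiences but only
# sees the first match.
print(select_content(experience, {"country": "US", "visits": 12}))  # VIP banner
```

Ordering the promotion list is therefore itself a targeting decision, which is why layering testing and machine-learned optimization on top of the rules matters.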



Will Privacy Regulations Favor Internet Giants?

2017-05-14T16:28:41.161-04:00

Last week’s MarTech Conference in San Francisco came and went in the usual blur of excellent presentations, interesting vendors, and private conversations. I’m sure each attendee had their own experience based on their particular interests. The two themes that appeared the most in my own world were:

- data activation. This reflects recognition that customer data delivers most of its value when it is used to personalize customer treatments. In other words, it’s not enough to simply assemble a complete customer view and use it for analytics. “Activation” means taking the next step of making the data available to use during customer interactions, ideally in real time and across all channels. It’s one of the advantages of a Customer Data Platform, which by definition makes unified customer data available to other systems. This is a big differentiator compared with conventional data warehouses, which are designed primarily to support analytical projects through batch updates and extracts. Conventional data warehouse architectures load data into a separate structure called an “operational data store” when real-time access is needed. Many CDP systems use a similar technical approach but it’s part of the core design rather than an afterthought. This is part of the CDPs’ advantage of providing a packaged system rather than a set of components that users assemble for themselves. CDP vendors exhibiting at the show included Treasure Data, Tealium, and Lytics.

- orchestration. This is creating a unified customer experience by coordinating contacts across all channels. It’s not a new goal but is standing out more clearly from approaches that manage just one channel. More precisely, orchestration requires a decision system that uses activated customer data to find the best messages and then distributes them to customer-facing systems for delivery. Some Customer Data Platforms include orchestration features and others don’t; conversely, some orchestration systems are Customer Data Platforms and some are not. (Only orchestration systems that assemble a unified customer view and expose it to other systems qualify as CDPs.) Current frontiers for orchestration systems are journey orchestration, which is managing the entire customer experience as a single journey (rather than disconnected campaigns), and adaptive orchestration, which is using automated processes to find and deliver the optimal message content, timing, and channels for each customer. Orchestration vendors at the show included UserMind, Pointillist, Thunderhead, and Amplero.

Of course, it wouldn’t be MarTech if the conference didn’t also provoke Deeper Thoughts. For me, the conference highlighted three long-term trends:

- continued martech growth. The highlight of the opening keynote was the unveiling of martech Uber-guru Scott Brinker’s latest industry landscape, which clocked in at 5,300 products compared with 3,500 the year before. You can read Brinker’s in-depth analysis here, so I’ll just say that industry growth shows no signs of slowing down.

- primacy of data. Only a few presentations or vendors at the conference were devoted specifically to data, but nearly everything there depends on customer data in one way or another. And, as you know from my last blog post, the main story in customer data today is the increasing control exerted by Google and Facebook, and to a lesse[...]