Infrastructure upgrades and migrations

Sun, 12 Jun 2016 12:43:35 -0800

For the last 6 months, we have been extraordinarily busy with a major infrastructure upgrade that will make Clicky much faster and more resilient. This is by far the biggest backend upgrade in our (almost) 10 year history. A lot of planning and testing has gone into this, and we're excited to finally be under way with the database aspect. Our load balancers have already been on the new infrastructure for a month now, and the web and tracking servers will follow once the databases are done.

The database maintenance in late April was in preparation for this migration, to make it as fast as possible. This weekend we will be migrating 8 database servers: 6, 22, 28, 29, and 59-62. Each weekend thereafter we plan to do at least 20 more. We have 85 database servers in total at the moment, so we should be done, or at least very close to it, by the end of June.

Time-wise, migrations are similar to the database maintenance we do a few times a year. Most servers will be in the 4-10 hour range, but some of the bigger/older ones will take as long as 24 hours. Traffic processing also, unfortunately, has to be halted during the migration. But these new servers scream, and will catch back up to real time in no time at all once the migration is done.

We have gone out of our way to make this process as transparent as possible. When a db server is migrating, your affected sites' dashboards will show a message like this:


The message shows the time the migration started, and an ETA for when it will be done (both in your local timezone), based on the average rows/second we know these migrations run at and the total number of rows left to migrate.
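In other words, the ETA is just simple arithmetic. A minimal sketch of the idea, with illustrative numbers (the variable names are not from our actual code):

    // Rough shape of the ETA calculation shown in the dashboard message.
    var totalRows = 2000000000;      // rows left to migrate on this server (illustrative)
    var avgRowsPerSecond = 75000;    // average observed across past migrations (illustrative)
    var etaSeconds = totalRows / avgRowsPerSecond;
    var eta = new Date(Date.now() + etaSeconds * 1000); // rendered in your local timezone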

Once a server has been migrated, reports should load much faster than before. We are planning to increase the filtering/segmentation limits as well once this is all done, as those should have much less of an impact on real time processing.

We're also excited to be done with this because it means we can get back to the software side of things, which is our real passion.

Hope everyone has an amazing summer!

HTTP/2, sticky table headers, tracking code fixes

Fri, 15 Jan 2016 11:56:42 -0800

No posts for a few months, so I figured I'd group a couple of things into a single post.

HTTP/2

Last summer we migrated to Nginx for load balancing and it's been fantastic. A few months later, Nginx released support for HTTP/2, which I've been salivating over ever since. We recently acquired some new servers that are much higher end than the ones we were using before. We migrated load balancing to these and took the opportunity to update Nginx to the latest and greatest while we were at it. This means we're now live on HTTP/2 across the board: our own load balancers, plus our CDN (Cloudflare). From some tests we ran, the only other analytics service (that we like to compare ourselves to) that supports HTTP/2 is Google Analytics.

Sticky table headers

When you scroll down a report, our navigation tabs have stuck to the top of the screen for quite a while now. But the table headers in most of our reports would disappear. Not a big deal before, but since we added all that yummy segments data to the reports last April, there are a lot of columns to look at. Today, we pushed up an update so the table headers will now stick to the bottom of the tabs when you scroll down, as you can see in the screenshot below. This was tricky, because we use tables for these particular reports. (Don't give us heck about using tables here; this is exactly what they were designed for, and it's pretty much the only place they're used on our site.) You can't just add position:fixed to a table row element, so instead we worked out a solution where we clone the table, remove everything but the first row, and stick that to the top, right below the tabs. It works great! (A simplified sketch of the technique is at the end of this post.)

Tracking code queue encoding issues fixed

When we released heatmaps in Oct 2012, we made a change to our tracking code to queue up certain types of beacons to make them more accurate (javascript events and goals) and more efficient (heatmaps). The queue is stored in a cookie in JSON format, which is URL encoded, then decoded when the cookie is read. The bug was that there was an extra URL decode in there, which badly broke JSON parsing when a URL or title had quote characters in it. What made it difficult to find and fix was that in other areas of the tracking code, I had inadvertently worked around this bug, thinking it was just a feature of browsers that they always decoded cookies when reading them. Alas no, it was just my own damn fault. This was a very uncommon bug to come across, but for the few people who emailed us about it over the years, it was really nasty. A fatal JSON parse error would basically kill javascript entirely, which could severely break a site with our code on it. Thankfully this is all fixed now.

White label iframe auto-sizing

This is only of interest to our white label customers since, for security reasons, we don't allow Clicky itself to be included within an iframe - unless it's a white label. Why do we let white labels use iframes? Because it's simply the best way to integrate the analytics reports into a company's existing site, making it completely seamless. The problem is that you have to manually declare an iframe's width and height, and as someone clicks around pages, the actual height of each page is going to change. Since some of our reports can get pretty tall, the best solution was to just set a ridiculously large height on the iframe so that it would never have to scroll internally. Good news!
We have now added messaging support to pass a message to the parent document with the height of the iframe document as each new page loads. Full details are here. (A rough sketch of the mechanism is at the end of this post.)

etc

There have been a ton of minor tweaks and bug fixes pushed up in the last 3 months as well. Sometimes there's a longer period of time between blog posts than there used to be. Don't worry, we're not slacking. We generally only post when there's a major new feature to talk about. There are lots of things going on behind the scenes at any given time that aren't necessarily of interest to our customers, but these things take away precious[...]
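For the curious, here's roughly the shape of the sticky header trick. This is a simplified sketch, not our production code; the selector and tab height are illustrative:

    // Clone the table, strip everything but the header row, and pin the
    // clone just below the navigation tabs, which are already sticky.
    var table = document.querySelector('table.report'); // illustrative selector
    var tabsHeight = 40;                                // illustrative tab height, in px
    var clone = table.cloneNode(true);
    while (clone.rows.length > 1) clone.deleteRow(1);   // keep only the header row
    clone.style.position = 'fixed';
    clone.style.top = tabsHeight + 'px';
    clone.style.width = table.offsetWidth + 'px';
    table.parentNode.appendChild(clone);

And the iframe auto-sizing is built on window.postMessage. A minimal sketch of the idea (the message format and iframe id here are illustrative; see the white label docs for the real details):

    // Inside the iframed document: report the current height to the parent
    // each time a new page loads.
    window.parent.postMessage({ height: document.body.scrollHeight }, '*');

    // In the parent document: listen for the message and resize the iframe.
    // (In production, check e.origin rather than trusting any sender.)
    window.addEventListener('message', function (e) {
      if (e.data && e.data.height) {
        document.getElementById('analytics-iframe').style.height = e.data.height + 'px';
      }
    });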

Filter out traffic based on country, organization, referrer, and more!

Wed, 21 Oct 2015 12:58:27 -0800

Some of the most requested features we get are to filter out all traffic from specific organizations, and to filter out all traffic from specific countries (or to only log traffic from specific countries, for hyper-local sites). For the countries one, we used to link people to an old blog post that showed how to do it, but that stopped working a year or two ago when Maxmind killed their javascript API.

Well, fret not! These types of filters, and more, are now officially supported directly from the new Visitor tags and filters site preference page.


These are all of the filters you can create from this new filter hub. And like the IP/UID filters of yore, every filter can either be site-specific, or apply globally to all sites in your account.

  • IP/UID - This was what you used to do from this page (and nothing else). Filter out traffic based on IP address or UID (tracking cookie).
  • Pages - Filter out page views that match specific patterns. This used to be part of the main site preferences page (hidden under advanced), but we wanted to centralize all filters, so it's here now, and with much better wildcard support than there used to be.
  • Query parameters - Enter query parameters you want stripped out of URLs (the main purpose of this filter type is to help normalize your content report), with the option to also ignore these page views completely if they contain those query parameters.
  • Countries - Ignore all traffic from specific countries, or only log traffic from specific countries.
  • Referring domains - Full wildcard support with this filter type. Two options here: either ignore all traffic that comes from specific domains, or log it and attach the referrer to the visitor but DON'T count the referrer in the top referrers (and related) reports. This second option is kind of weird, but everything exists for a reason.
  • Organizations - Filter out all traffic that we resolve to specific organizations/hostnames. Full wildcard support.

Clicky is the best service in the biz when it comes to filtering out robot activity and spam referrers. We are able to be so good at this because you guys let us know about new ones that pop up in your logs. So while this new filter system can be used to filter out bot/spam activity immediately for your own account, we still want you to let us know if any traffic in your logs is suspect, so we can update our global block list to benefit everyone. So don't let this make you lazy!

Nginx and A+ SSL

Sun, 04 Oct 2015 18:28:23 -0800

Back in June, we migrated to Nginx for load balancing, which has been fantastic. I've been wanting this for years and I was so happy when it finally happened.

We're using some older hardware we had lying around; the parts for each server cost barely $1,500 combined when originally purchased. Compare this to the $7,000 we paid for each of our old Kemp load balancers, not to mention the $1,000+ support contracts per load balancer we needed for software upgrades and warranty exchanges. (We've had to do those exchanges 3 times over the years: of the 6 Kemps we've owned, 3 have died and had to be replaced. A 50% failure rate, versus the ~60 servers we've built over 9 years, of which exactly 1 has ever died.) These sub-$2,000 machines can handle more than 3x the load of our $7,000 proprietary load balancers. 3x the performance for 1/4 the cost = 12x more for our money, plus no more support contracts.

Beyond expense and reliability issues, we also wanted to have really great SSL. The kind with SPDY / HTTP/2 and an A+ rating. With Kemp, we had none of this.

The final step for an A+ was enabling HTTP Strict Transport Security, which took some consideration because there's really no going back. There were a number of things to double and triple check and some code to update, etc. But as of today, it's live. This feature tells a browser to always connect to a domain via HTTPS, even if a page/link tells it to do otherwise, ensuring all of your interactions with a web site are always secure.
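For reference, HSTS itself is just a response header sent over HTTPS. Ours looks something like the following (the one-year max-age shown is the typical value for an A+ rating, not necessarily our exact policy):

    Strict-Transport-Security: max-age=31536000; includeSubDomains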

Out of curiosity, I checked all of our competitors' SSL ratings. Only one of them has an A+, and surprisingly it's not Google. Otherwise it's a mix of A, B, and C. When we were on Kemp, we had a C, and now we're A+. From worst to first, oh yeah!

Webhooks and push notifications for alerts

Mon, 15 Jun 2015 20:18:59 -0800

We just pushed some new updates for our alerts system.

First, webhooks. Basically this means we'll ping a URL that you give us every time that alert is triggered, and you can do what you want with the data we pass through to it. Full details on this are on the alert setup page.
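If you've never consumed a webhook before, the receiving end can be as simple as this generic sketch (Node.js core modules only; the exact payload format we send is documented on the alert setup page, so we just capture whatever comes in):

    // Minimal webhook receiver: log each alert payload and acknowledge it.
    var http = require('http');

    http.createServer(function (req, res) {
      var body = '';
      req.on('data', function (chunk) { body += chunk; });
      req.on('end', function () {
        console.log('Alert received:', body);
        res.writeHead(200);
        res.end('ok');
      });
    }).listen(8080);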

A while back we added push notifications for Safari on OS X. More recently, Google Chrome added push notification support, including on Android, which is killer. We don't have an app so the thought of finally having mobile push notifications was exciting.

We wanted to add support for Chrome notifications pronto, but the security behind push notifications is a flippin' nightmare to say the least.

Then we discovered Roost, which handles all of that for us. So we signed up and implemented it on our site, which was much easier than doing Safari manually. Roost supports Safari on OS X, Chrome, and Chrome on Android, with Firefox support coming soon. Two-thirds of you already access Clicky with Chrome or Safari, and when Firefox support comes out, we'll be looking at almost 90% coverage, so that's pretty great.

The only thing we don't like is that we weren't able to add audio support to these push notifications. Oh well!

So how do you set these up? On the alert setup page there is a new checkbox for push notifications, along with a link to manage your push notification devices. Click that link to add your supported browser to your list of devices (if it's not supported, it will tell you). Then go back to the alert setup page and you can select which device(s) to push to for that alert.

Here's what the alerts look like in action. When you click on an alert, you will be taken to the visitor session on Clicky. Enjoy!



A few other things.

1) No iOS push notifications yet. Hopefully either Apple or Google will add them to the iOS versions of Safari or Chrome soon!

2) We killed the old Safari push notification system. That means you'll need to opt-in again on Safari to get alerts.

3) Push notifications work in the background, so even if your favorite browser isn't supported, you can still opt in using a supported browser to get push notifications on your device. When you click them though, it will open in the browser you opted in with (e.g. Chrome).

Clicky Supersize XL Extreme edition

Thu, 07 May 2015 15:08:58 -0800

I've been wanting to make Clicky take advantage of larger screens for a long time, but there were a lot of challenges involved beyond just changing the width of the site container. (I know that sounds pretty lame, but just trust me). After last week's release of our new Segments feature, it was clear this needed to be the next major project. Segments are great, but they take up a lot of horizontal space, which really crippled a few reports. For example, the Links (referrers) report. [Note: Click any of these screenshots to see a full sized version]

That just killed me, having every referrer be basically unreadable beyond the domain name. Well, here's what that report looks like now. Much better, eh?

Here's a summary of what's changed:

  • Instead of a fixed width of 960px, the width is now dynamic with min/max boundaries of 960px and 1200px. We wanted to make sure everything still worked fine with the old width (e.g. this is important for some of our white label customers who have Clicky integrated via iframe), but let people with larger screens take advantage of them. I'm not really a fan of sites that go the full width of your browser window, as I feel readability suffers on really large monitors. I tested various max widths and anything over 1200 just felt wrong. BUT... my opinion on that may change. Feel free to try and convince me.
  • We now have the browser do string-shortening for us, instead of doing it on the backend. This always ensures that the text takes up the full width available, which was extremely important in order to support a dynamic width. Doing it on the backend has also always been a major issue for non-English character strings, as our old shortening code was not multibyte friendly, so I'm glad that's behind us now! This was the major challenge of this release. The basics of it are of course dead simple, but it's the small details that matter. The voodoo magic I had to perform to get this to work in all the scenarios I needed felt like jamming a square peg through a round hole over and over (and over) until it finally fit.
  • Spy looks gorgeous. We got a much higher-res map, and combined with the wider screen, it's killer. (Screenshots below). We've also moved the controls (e.g. map zoom) to the top left corner so they're in a fixed position instead of moving around when the map size changes.
  • There's not really any small text anywhere anymore. Now that we have so much more room, the font sizes (almost) everywhere are more of a normal size, which makes things a lot more readable.
  • The main site dashboard and the user homepage both need some work to take better advantage of all the extra space, but I didn't feel delaying this release was warranted just for that. I wanted to take some time to think up and test different ideas. For example, one thing I'm considering is an option for a third column on the dashboard. So hopefully we can get some updates out for those in the next few weeks.

SPY VS SPY

Here are a few screenshots comparing the old Spy to the new one. Click any of them to see full size!

OLD SPY :(

NEW SPY :D

This was heavily tested with all 5 of the major browsers, and everything is cool, including all the way back to IE7. However, there are bound to be a few bugs, so please let us know if you experience any layout issues. Screenshots and details of your OS/browser version are of course appreciated.[...]

New feature: Segments!

Sun, 26 Apr 2015 21:21:06 -0800


It's been a while but the wait was worth it! (Yes, we're still here!)

Segments has been in demand FOR. EVER. and we could not be more excited to finally offer this to you. Think of it as the one-off segmentation we already offer, but integrated into nearly every report automatically and with almost no impact on speed. You can also sort the reports by any of these new columns. It's pretty nice!

(A very important note: the data for your main Content report is vastly different from any of the others, but for good reason. This is fully explained here.)

One reason we avoided doing this for so long is that we didn't think we could make it happen without a drastic hit on performance. But a lot of creativity went into the backend, and the results are great. One aspect of the great performance is data sampling for higher traffic sites. Don't worry, we only do it for high traffic sites and/or large date ranges. It's explained in full detail in the new Segments knowledgebase article.

If your site has more than 10,000 page views in the date/range being viewed, sampling takes effect on all data past the most recent 10,000 page views. By default the rate is 25%, but Pro Plus and higher members (Upgrade) can choose a few other options, including 100% if you really want it. You can change this setting on your user preferences page, or via the new sampling menu next to the trend menu, so you can change it quickly as you hop between sites.
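To make the math concrete with illustrative numbers: say the date range you're viewing has 50,000 page views and you're at the default 25% rate. The most recent 10,000 page views are counted in full; of the remaining 40,000, we scan a 25% sample (10,000) and scale those results up by 4x.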


We plan to make this new feature available via the API as well, but first we want to give it a day or two and see the impact this has on our resource usage once thousands of people are using it at the same time. Just playing it safe.

There are various other tweaks and small changes with this release, but probably only one that many will notice.

Our old family style reports, e.g. browsers grouped by family (Firefox, etc) had to go the way of the dinosaur. They were a really nasty hack and anytime we made global updates to reports like this, it was always a huge challenge to get those changes into those family style reports. We've never had an update to all reports as big as this one though, so I just had to declare bankruptcy on that code and move on.

But I felt good about it, because the backend API changes I had to make for this new data structure allowed a lot more flexibility so this worked out well and it's fast.

You can still view individual families. You'll just have to use the new menu system I built into these reports, as you can see in the screenshot below. This affects Browsers, OSes, Hardware, Campaigns, and Custom data. I killed the search engines one since there were only a few categories anyways, and split tests are also no longer grouped by family, because even people who do use that feature tend not to have more than one or two total.


That's all for now!

Change your trend preference on demand from any report

Wed, 03 Dec 2014 19:34:03 -0800

When we revamped and centralized our UI preferences in October, our goal was to simplify things, particularly for new users. There was one change that a good number of power users didn't like, though: the same trend preference didn't always make sense for all of their sites and/or dashboards. Previously the trend was a dashboard preference, and each dashboard is tied to a specific site, so for some power users the old setup was better.

To address this complaint, we've added a new trend menu in the top right corner that allows you to change your trend preference on demand.


When you change the trend option, it's saved to the database immediately so it sticks with you as you move to new reports and/or sites in your account.

I like this new method quite a bit; it's really nice to be able to change the trend on demand while viewing a report, to compare different trends quickly and easily. And with this menu front and center, more people are aware there's even a trend preference in the first place. (Even with the new setup in October, there were still not nearly enough people aware of this option.)

We didn't have enough horizontal room to create a full menu button that would display the current option, hence the choice of using an icon. Hover over the icon to show the current setting. This particular aspect isn't necessarily ideal but it was the only design that we felt worked.

HTTPS for all

Tue, 18 Nov 2014 12:43:23 -0800

As HTTPS becomes more and more common for even just the simplest of blogs, we've decided to allow the tracking of HTTPS sites for all customers, including free customers and those on our grandfathered Starter plan. Clicky.com is now also available over a secured HTTPS connection for all customers. Previously we only offered this to paying customers. One of the main reasons for this was that our advertising company, BuySellAds, didn't support ads being displayed over HTTPS. We offset the cost of offering free service with the revenue we make from our ads, so we had no choice in this department. BSA just pushed out HTTPS support though, so now clicky.com is available via HTTPS to all customers, paying or not.

We only force HTTPS on our web site when you're logged in. If you want to disable the full time HTTPS for any reason, you can do that in your user preferences.

Favorite sites

Tue, 21 Oct 2014 18:18:23 -0800

For those of you with multiple sites in your account, chances are there are a few sites that are a lot more important to you than the rest. Now you can make sure these sites are always at the top of your user homepage for quick access, on both the desktop and mobile versions of Clicky.

Flagging a site as a favorite works just like the other reports. Click the star and presto, it's a favorite. Click it again and it's no longer a favorite. If you need a moment to pick your jaw up off the floor, please take it.

On the desktop there are options to sort your sites by something other than alphabetically, such as visitors or bounce rate. When you use favorites, the same sorting rules will still apply, but divided into the favorite and non-favorite sites. In these screenshots that sorting is default (alphabetical), so you can see that the favorites are at the top sorted by name, then the rest are below them also sorted by name.



New global user preferences for trends, graphing, and more

Tue, 14 Oct 2014 22:53:49 -0800

Preferences for trends, graphing, and a few other things have always been site-specific, but most customers want them to be exactly the same for all of their sites. There are also the (large number of) people with sub-user accounts who are at the mercy of the master account as to how these preferences are set.

We've gutted this mess and moved these preferences into the global user preferences page. Now every user has control over exactly how they want all of these settings to work. A screenshot of the new page is below.

The site preferences area is now all related to tracking preferences, which makes more sense. The dashboard preferences are now just for the layout of the modules, and the settings of each one (e.g. which tab is the default).

We also added a few new features that have been requested fairly often: A site preference to allow the logging of all bot activity, and a user preference for hiding the current hour in hourly graphs, to avoid panic when seeing a 'cliff' near the beginning of the hour. These are covered in more detail in this post.

Also released today were some new on-demand features for graphs, so be sure to check those out too and let us know what you think!

Here is what the new user preferences page looks like:


New preferences: Log bots, and hide the current hour in hourly graphs

Tue, 14 Oct 2014 22:51:16 -0800

As part of our massive preferences overhaul, we added two new preferences that have been requested fairly often.

Bot logging

This is a new site (tracking) preference. We highly recommend leaving this OFF - log analyzers are the best way to look at bot activity on your site - but there are specific use cases where this may be useful. For example, if Clicky is logging significantly less traffic than another tracker you are using, bots are a likely cause so enabling this will let you see if that's the case or not.

Here's where to enable this new option: (Again, we highly recommend leaving it OFF...)


It's important to note that this is no guarantee that all bots will get logged. We'll only log bot activity if they interact with Javascript (a lot of them do these days) or load our 'noscript' backup tracking method.

We'll try our best to identify Google, Microsoft, and Yahoo bots, which you'll see under the new 'Robot' section of the web browsers and operating systems reports. All other robots will just be labeled 'Robot'. Bots will also be logged as normal visitors if this is on, so you will see them in your visitors report and you can filter visitors with the new robot user agents.


Hide current hour in hourly graphs

Near the beginning of the hour, the number for that hour (visitors, actions, whatever) might be quite small compared to a full hour. If you view your reports near the beginning of the hour, you will see this as a 'cliff' and perhaps panic that your traffic suddenly died.

We highly suggest checking the current time before panicking in such situations, but we get it that some people would prefer never to be in that situation. So now there's a new user preference to always hide the current hour.


Also released today were some new on-demand features for graphs, so be sure to check those out too and let us know what you think!

New graphing features

Tue, 14 Oct 2014 22:47:29 -0800

Part of our massive preferences overhaul included chipping away some nasty cruft. One of those things was the option to make simple HTML bar graphs the default graphing method. This was used by very few people, so we removed it as a possible default.

But have no fear: we know this method is still useful even for people who never set it as their default, so we changed it to an on-demand option that you can see in the top right corner of every graph. Whatever date range you're viewing in a graph, clicking the bar graph icon will show the same data, with one exception: no hourly data. Bar graphs have never supported hourly data, since they were created (and deprecated) well before we added support for it, and they still don't. When viewing hourly data, a bar graph will just show the last 28 days instead.


We've also added the ability to export the raw data you see in any graph straight from the API, to CSV or any of the other formats that exporting of normal (non-graph) reports has always supported. Unlike bar graphs, this method does support hourly data. It will honor whatever date range you are viewing in the graph.
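If you'd rather skip the UI entirely, the same stats API works for this kind of export. Something like the following (placeholder site ID and sitekey shown; check the API docs for the exact parameters your report needs):

    https://api.clicky.com/api/stats/4?site_id=12345&sitekey=0123456789abcdef&type=visitors&date=last-30-days&output=csv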

The new-ish 'Download' menu we added for graphs has been changed to an icon indicating you can save the graph. Inside are the old options to save it in various image formats, plus new options below those to export the raw data itself.


Also released today were a few new features that have been requested fairly often: A site preference to allow the logging of all bot activity, and a user preference for hiding the current hour in hourly graphs, to avoid panic when seeing a 'cliff' near the beginning of the hour.

These are covered in more detail in this post, so be sure to check those out too and let us know what you think!

Yup, we're still alive. Here's what we've been up to the last 5 months.

Tue, 07 Oct 2014 14:23:56 -0800

It's been 5 months since our last blog post. Normally we are known for our constant updates and posts, so when people started asking us recently if we were still alive, that was understandable. I am here to assure you, we are in fact still here and things are just fine! We did try to enjoy life a little this summer, so things were definitely a bit slower, but we still pushed out plenty of updates. Not many of them warranted their own post, but we figured with people wondering about our status, we'd let you know what's been happening since our last post in May.

More real time!

Clicky processes data in batches, which makes things much more efficient than processing every single action as it streams in. Since the beginning (2006!) the batches have been done once a minute (excluding Spy, which is a separate system and does in fact show data live as it streams in). After nearly 8 years, we decided it was time to up the game. We started by halving the interval down to 30 seconds, and then 20. It's still at 20 right now, but the hope is we can get it down to once every 10 seconds. More testing is needed though. We've also been testing a version of Spy that uses web sockets, so data is pushed to your browser, making it truly live, instead of the current method of pulling once every few seconds. Testing and debugging has taken a lot longer than expected, but we're getting there and hope to have this in production fairly soon.

First/third party cookie synchronization

We use a combination of first and third party cookies to track unique visitors to your site. Third party cookies always took precedence though, which is particularly useful for those of you tracking multiple domains (including sub-domains) under the same site ID. The problem was that if you wanted to query our API live based on a visitor's unique (cookie) ID, the cookie that your site would see for that visitor would not necessarily match the cookie we were using on our end, since your site would only see the first party cookie. We've updated the tracking so that when a hit comes in, we always update the first party cookie to the same value as the third party cookie if they don't match. This change allows you to reliably use the first party cookie (_jsuid) value to query our API. (A small sketch of reading that cookie is at the end of this post.)

PushState navigation

4-5 years ago, hashbang navigation was all the rage for making your site feel like an app. There was simply no alternative. Then browser vendors added support for the new History API, which offers the same benefits as hashbangs, and none of the drawbacks. One day I had had enough and decided to update Clicky to this new method. It only took about a day and we couldn't be happier with the change. Not to mention the pleasant side effect of fixing a serious bug that was nearly 4 years old, which I had never been able to figure out until I was rewriting all of this code. The bug was rare but it was a really nasty one, and the moment I figured out the root of the issue was cause for celebration.

Weekly/monthly data accuracy

Clicky stores weekly and monthly data in their own tables to make querying a lot faster over large date ranges. The problem was that these tables were only updated once a day, at around 4am PST (GMT -8). So data for the current week and month would always be off, because they would never include today. We updated the code to take this into account, so these values should always be accurate now.
Multiple values for single custom data keys

Clicky has always logged every piece of custom visitor data you've thrown at it, but for any given key, we've always only shown the last value logged for any specific visitor. We will now show all of the values logged, both in the web UI and in API exports. Code generat[...]
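As promised above, here's roughly how you'd read the first party cookie to pair with an API query. The cookie name (_jsuid) is real; the rest is a generic sketch:

    // Read Clicky's first party cookie so this visitor can be looked up
    // via the API later (e.g. from your own backend).
    function getCookie(name) {
      var m = document.cookie.match('(?:^|; )' + name + '=([^;]*)');
      return m ? decodeURIComponent(m[1]) : null;
    }
    var uid = getCookie('_jsuid');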

OS X push notification alerts

Tue, 06 May 2014 16:04:19 -0800

We just added OS X push notification support to our alerts system. Push notifications at the OS level require the latest version of OS X (Mavericks).

Here's what an alert looks like. Click on an alert to be taken straight to the visitor session on Clicky.


To set this up, you need to use Safari on a Mac running Mavericks. Safari is only needed for the initial authorization of push notifications, so if you use a different browser normally, don't panic.

Go to the alert setup page for any of your sites and click the link at the bottom:


You will be prompted to authorize Clicky to send push notifications to your Mac. Once you grant permission, the device (your Mac) will be attached at the user level, so if/when you setup alerts for other sites, you will NOT need to go through the authorization process again; the device will already be available to you during the alert setup process.

You can name each Mac after you add it, so if you have multiple Macs and you want to get alerts on all of them, you will know which one is which, in case you want to remove any of them later. This includes sub-user accounts: when managing alerts, you will see a list of all devices that belong to all users with access (on Clicky) to the site in question.

HTML5 History API tracking

Wed, 30 Apr 2014 16:15:18 -0800

If you use the HTML5 History API on your site, good news! Clicky now automatically supports it.

This type of navigation typically only reloads a small portion of your page to inject new content, which means our tracking code (previously) would not be executed again since that part of the page would be static -- unless you manually added calls to clicky.log or clicky.pageview when you executed history.pushState or history.replaceState.

Clicky will now automatically track calls to both pushState and replaceState, as well as listen for the window.onpopstate event that occurs when a visitor clicks back or forward in their browser.

If for some reason you need to disable this new functionality (for example, if you were already logging these calls manually), you can do so with the new clicky_custom.history_disable option. A minimal sketch of both approaches follows.
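This is illustrative, not copy-paste-ready for every setup: we're assuming clicky.log's (href, title, type) argument order here, and the URL and title are made up.

    // Disable automatic History API tracking, e.g. if you already log
    // these page views manually. Set this before the tracking code loads:
    var clicky_custom = clicky_custom || {};
    clicky_custom.history_disable = true;

    // The manual approach this option preserves: log the page view
    // yourself right after changing the URL.
    history.pushState({}, '', '/new-page');
    clicky.log('/new-page', 'New page title', 'pageview');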

Long term, we want to replace the custom hashbang-ish navigation that we use on clicky.com with the History API. This automatic support for logging these methods will be a good motivator.

Update, May 7: We've removed replaceState() from being auto-tracked, as its purpose is quite a bit different from pushState()'s, and it was causing some excess data to be tracked that shouldn't have been.


Clicky was not affected by Heartbleed

Wed, 09 Apr 2014 09:46:32 -0800

What a crazy couple of days it's been for web site owners around the world.

Clicky was not affected by the Heartbleed security issue. The versions of OpenSSL running on our public servers and load balancers were not any of the ones that were vulnerable. To make doubly sure, we ran tests on all of our public IPs, and they were all reported safe.

If you're going through all of the web sites you have accounts with and changing passwords or something of that nature, you can cross Clicky off the list.

New trend option: 'vs last year'

Sun, 06 Apr 2014 15:55:06 -0800

We have a new trend comparison option that lets you compare reports and graphs vs the same date or date range from the previous year. For example, April 6 2014 vs April 6 2013, or April 1-6 2014 vs April 1-6 2013.

You can see this new option in the date menu for any graph:


You can also set this as your default trend option in your dashboard preferences. In this case, 'vs last year' will always be the default graph comparison, and last year will also be used to calculate the trends we report (the red/green percentages next to each number in most reports).


HTML5 audio, embedded actions, tracking code verification, and better cookies

Tue, 01 Apr 2014 20:20:46 -0800

We wanted to let you all know about some of the bigger things we've released in the last month that you may not have noticed, because we didn't post about any of them - until now!

HTML5 audio, and automatic HTML5 video/audio tracking

We've had support for HTML5 video tracking for a while; it simply required adding an additional javascript file to your code. This month we added the ability to track audio as well. We combined video and audio together into their own report, now called Media, since they are quite similar in terms of the metrics we track - and it's unlikely that many sites will have a mix of both. Even better, we made it so you no longer need to include the extra javascript file at all. When the tracking code detects any video or audio elements on your web site, it will now automatically inject the other javascript file into your page, so we can track the video and audio files without you having to do anything. Don't worry, if you already had that file included manually, you don't need to delete it. It will work either way.

Embedded actions in the visitors report

This new preference, disabled by default, adds some additional detail to the standard visitors report. You will no longer need to click through to see the actions (page views, etc) of each visitor. The first five actions will be displayed right beneath each visitor, with a link below them to see full session details if there are more than five actions. Note that this will slow down the loading of the visitors report by a somewhat noticeable measure, since we have to query for a lot more data, but it's pretty nice if that's what you're into. Here's what it looks like, and where you enable this new preference:

Tracking code verification

This is a nice feature to check out if you think you have the code or a plugin installed on your site but you're not getting any stats. There are multiple reasons why tracking may not be working, but this will at least help you narrow down the list. You can access this feature on your tracking code page.

Better cookies

The cookie system we had been using since October 2012 to authenticate you when you're on your own web site (which then auto-ignores your visits and loads the on-site analytics widget)... was a bit less than ideal. The biggest problem was that we used a single cookie for two different features (widget and auto-ignore). On top of that, the cookie was always set, and it would stay set (for a year) even if you clicked the logout button. The reason we wanted it to stay set was so that your visits would always be auto-ignored, but it was a bit of a security issue in terms of the same cookie being used to authenticate on-site analytics. The chances of anyone ever seeing anything via the widget that they shouldn't have were microscopic (it would only happen on a shared machine, and only if you logged into both Clicky and your web site, and someone after you looked at your history and also visited your web site), but that's no excuse. So, we've changed several things here. There are now two unique cookies: one for the widget and one for auto-ignore. When you logout, the widget cookie is deleted, but the auto-ignore cookie stays set so your own visits will continue to be ignored. Last, the expiration time of the widget cookie was shortened to 90 days (from 1 year previously).

That about wraps it up for now, but there's plenty more in the pipeline![...]

New mobile site built with jQuery mobile, and a plugin to automatically track everything

Tue, 25 Feb 2014 17:34:34 -0800

We just launched our new mobile site, which has been in development for the last couple of months. The old one was built with iUI, hot at the time (2008) but crusty by today's standards.

jQuery mobile (JQM) was our framework of choice. Since it has all those fancy events, we wanted to track those too. Automatically, of course. With JQM growing in popularity, we decided we'd build a plugin that pairs with our tracking code to track these events automatically, and then release the plugin for everyone to use once we were done. And, we're done.

Instructions for the plugin are available here. Basically you just add a single line of javascript after the Clicky tracking code in your HTML, and that's it. See the link above for more info.

A screenshot of the new mobile site is below. It shows the main dashboard after selecting a site on the previous page. In the old site, you had to select a date range right after selecting a site, because of severe limitations of the framework we were using. Now there are actual menus (amazing, I know), which you can see in the top right: one is a shortcut to change the date, and the other brings up a side panel with more options.

You can graph trends just as you could before, although we're not showing trends anywhere other than right on the main dashboard (we couldn't make trends fit in with the new style of reports). Instead, in a report such as traffic sources, click on a number itself; that will pop up a graph, with options to change the date range.

Let us know what you think.


New option to pass alerts to the API

Tue, 11 Feb 2014 15:43:33 -0800

When you create alerts for something like a goal and configure them to be sent via email or Twitter direct message (RIP), what you get is a short URL that takes you right to the visitor session details on Clicky. Handy, but a manual process.

We had a request today to allow the alert IDs to be passed straight to the analytics API, so that automated processes could get more data about these visitors. We thought this was a great idea, so we hopped right on it, and it's now available.

Full details and some example code are in the docs, but briefly: if you extract the alert ID from the end of the URL and pass it to the API as alert_id=XXX, we'll look up the session ID (first verifying the session belongs to the site_id and sitekey you also pass in) and convert the request to be as if you had originally sent in a type=visitors-list&session_id=YYY request.

You need to know what you're doing to implement automatic parsing of emails, but for the people who have the ability to do this, we think you will like this new feature quite a lot.

Here's an example API request for an alert we have setup for our blog:
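(The IDs below are placeholders rather than our real ones; the alert ID comes from the end of the short URL in the alert message.)

    https://api.clicky.com/api/stats/4?site_id=12345&sitekey=0123456789abcdef&alert_id=abc123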

Bing's secure search will be worse than Google's, for most sites

Wed, 15 Jan 2014 13:26:08 -0800

I never realized how much I would come to love Bing until the last year. I don't use it, but now that Google is pretty much 100% secure search across the board, almost all of the search phrases that we end up logging are from Bing. Yay for being able to tell what people searched for when coming to our site! Earlier today I saw this announcement that Bing is now testing secure (HTTPS) searches. It's different from Google's though, at least right now. For a small number of sites, it changes nothing. But for most sites, it's much worse. Why?

On Google, when you do a search over HTTPS and click a result, it actually sends you through an intermediate, non-HTTPS page. For example, if you go to google.com and search for Clicky and click the first result, the URL you actually go to is a google.com/url redirect ending with parameters like usg=AFQjCNFOPVnz4LaWTBeOHfeCOADfndg5BA&sig2=vLGsm0mdBFXCn-4n_CQ11Q&bvm=bv.59378465,d.aWc&cad=rja, and then you are redirected to our web site. You can see they hide the search parameter (q=), but because this intermediate page is handled with standard HTTP, at least we still know they came from Google. Better than a kick in the pants, right?

The problem with Bing's new secure search is that there is no intermediate page. This means that unless your site uses forced HTTPS across the board, not only will no searches get logged, you won't even know they came from Bing in the first place (because browsers don't send referrers when going from HTTPS -> HTTP). Since there will be no referrer, those visitors will get logged as direct visitors. For those of you with forced HTTPS on your site, you will still be able to see the actual search phrases used. But since very few sites use forced HTTPS, this will impact the vast majority of web sites out there. This hasn't technically been released yet, so things might change, but as they stand now, this is how it is.

To be clear, it doesn't appear Microsoft is doing this to hide data from web site owners, like Google is intentionally doing, but rather as a way to protect users from NSA spying. That's a great reason for this change. I just really hope they also understand the negative impact this will have on most site owners, and add some kind of way for us to detect that the visitor at least arrived from Bing. Hiding the search phrase is one thing, but hiding the source of traffic entirely is just really bad for site owners.

There is a small amount of hope they might make it work like Google does: this change is really going to hurt Bing's marketshare numbers as reported by various services, including us. For this reason alone, I think Microsoft will change things around. They don't want to start seeing headlines about Bing's marketshare plummeting to zero.

I wanted to add that all of this secure search stuff impacts all trackers, not just Clicky. And I also wanted to let you know that we have a plan to deal with this, at least when we know the source of traffic is a search engine. As far as I know, no other tracker does anything special with these secure searches. We want to be the first to offer a solution, so I'm not going to go into details now, but I think our proposed solution will work well for most of our customers. Stay tuned.[...]

Authentication improvements to on-site analytics

Sat, 11 Jan 2014 14:41:02 -0800

There's been an authentication bug with the on-site analytics widget (OSA) for about a year. It only affected a small number of customers, so it was hard to track down. This was not a security bug; rather, OSA would simply not display for a small number of people. While mucking around with heatmap updates earlier this week, I was also updating some of the OSA code, and a lightbulb went off in my dang head as to the cause.

Here's how OSA works. When you login to Clicky, a cookie is set for getclicky.com, which is our tracking domain. This cookie is used for authentication purposes. When you visit your own web site, the tracking code tracks your visit and sends that info to getclicky.com. If that cookie is set, then we check if it matches the owner of that site ID on our end. If so, then we call some code to display the OSA widget (and at the same time automatically ignore your own visit).

OSA was released when our domain was still getclicky.com. So at that time, browsers considered the cookie to be a first party cookie, because it was a sub-domain of the site you were already on. Life was good. Two months after OSA was released, we acquired clicky.com. But we left *.getclicky.com as our tracking domains, so people wouldn't have to change their tracking code and everything could just be transparent. This change meant that setting a cookie for getclicky.com when logged on to clicky.com was now a third party cookie as far as browsers were concerned. So when we started getting emails about OSA not working, we assumed that the cause was third party cookies being disabled in their browser. And in lots of cases, this was the cause. But there were a few people who, no matter what they did, could not get the widget to work.

So, that lightbulb I was talking about earlier... when getclicky.com tells our tracking code to execute the OSA widget, the widget is loaded from clicky.com, not getclicky.com. We hadn't made any major updates to heatmaps or the widget since the domain change, until earlier this week when I was messing with that code, so I hadn't really dived into that block of code for a while. Sure, I had looked at the code since it was written, but you can't really get into the zen with existing code until you start playing with it. The problem was that the third party cookie would authenticate you on the tracking domain, but unless you had checked the remember me box when logging in to clicky.com, you had no authentication to load the OSA widget from clicky.com, so it would just exit out. Most people use the remember me option, so that's why this only affected a small number of people.

What we've done is, now when your beacon (with the proper cookie) is sent to getclicky.com, before calling the OSA widget, we tell the tracking code what your sitekey is. The sitekey is used to authenticate requests with the API and other things. So now when OSA tries to load by sending a request to clicky.com for the data, it's authenticated with the sitekey.

I'm always 100% of the time logged in to clicky.com with the remember me option set. Once I discovered this issue, I logged out, and sure enough, OSA would not load for me anymore when I was on our own site, even though my getclicky.com cookie was still set. Now after making this change, whaddaya know... it works! And it should work for you too, if this was affecting you![...]

Alerts for organizations

Thu, 09 Jan 2014 17:48:16 -0800

We had a customer request to be alerted of visits from certain organizations, whether generic in nature (the example given here was 'school'), or something more specific.

Yeah, we should have already had this. It took less than 30 minutes to add. This will be extremely useful for those of you who use Clicky as a lead generation tool.

Another example: pretend you work at a newspaper and you have a feeling that a major competing newspaper is stealing your stories. Setup an alert to see when they visit, as shown in the screenshot below. (Not saying NYT does this of course, it's just an example).

This will match both organization names and hostnames.


UPDATE JAN 10: We just made another change to alerts, so the item that triggered the alert is now included in the alert notification. For example, if you use wildcards in organization alerts, more than one organization can trigger that alert, but previously you wouldn't know what the actual organization was until you clicked through and arrived at our web site. We've had this request before, and now, especially with the new organization alerts, we figured it was time we added it. This change affects all alert types of course, not just organizations.

While making this change we also found a bug with alerts for dynamic campaigns (ones that use UTM variables). Namely, alerts didn't work for these at all. This has been fixed!

Heatmap updates: Longer storage, better scaling, and new filters

Thu, 09 Jan 2014 15:21:27 -0800

We just pushed some updates to heatmaps today.

First, we've extended the storage time to 6 months, up from 3. When we released heatmaps about 15 months ago, we were worried that they would take up too much space if we kept them around for too long, so we limited storage to 3 months of data. It took a while before lots of people were using them, so we couldn't reliably analyze the impact on storage space until recently. Good news: they take up less space than we had anticipated, because we made some good decisions on the design side of things. Of course, data older than 3 months from right now has already been purged, so you don't have 6 months of data yet. But 3 months from now, you will.

Second, we added a few new filter options when using heatmaps via the on site analytics widget. The new filters are under the More... menu, as shown below. These new filters allow you to view heatmaps for a page for just new visitors, returning visitors, visitors online now, or registered visitors. Registered means they have a username custom data field attached to them.

We'll probably add a few other filters as we think of them. What would you like to see here? One thing we considered was some kind of engagement filter, e.g. only visitors who bounced or only visitors who did NOT bounce. However, if you think about it, almost everyone who bounces would not have clicked anything on your site, so it wouldn't really tell you much.


Last, we changed the scaling of the heatmaps. The problem was that for pages with lots of clicks, there would generally be a few extremely hot areas, e.g. in the navigation bar, and they would be so much hotter than the rest of the page that the other clicks would not even be visible. We've changed it so the scale of hotness now only goes from 1 to 10, rather than 1 to whatever the maximum happened to be as stored in the database.

Here is an example of a heatmap for our homepage using the old method, for all clicks:


Most visitors coming to our site are already customers, so the two login links are going to be the most clicked items, by about a thousand miles. Of course, there are plenty of other clicks on this page (e.g. new people signing up are going to click the big ass register button), but we can't see them unless we apply some filters first, such as only showing visitors who completed our new user goal. It would be real nice to be able to see ALL the clicks on this page though, wouldn't it?

So here's what the new scaling method does to the exact same pool of data as in the last screenshot, most of which was completely invisible:


It's certainly noisier but the hot areas still stand out, and yet you can see every single click on the page, because the range from min to max is so much smaller.
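If you're curious, the effect is basically a fixed-range normalization. A simplified sketch of the idea, not our exact production code:

    // Map raw click counts onto a fixed 1-10 hotness scale, so a few
    // extremely hot spots can't drown out everything else.
    function hotness(count, minCount, maxCount) {
      if (maxCount === minCount) return 10; // all spots equally hot
      return 1 + Math.round(9 * (count - minCount) / (maxCount - minCount));
    }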

Let us know what you think!

Twitter direct messages

Mon, 11 Nov 2013 15:06:02 -0800

Four and a half years ago we released several Twitter features, one of those being to receive alerts via direct message on Twitter. This has been an awesome feature and we have used it ourselves since day one, to be notified of certain goals happening on our site, such as a new paying customer. Earlier this year we released our uptime monitoring feature, and we added Twitter direct message alerts there also.

So we are sad to say that today, these features have been removed, and it's unlikely they will ever come back.

Sometime in mid-October, Twitter added some pretty strict throttling. We never saw any news about this, but within a few days we started getting tweets and emails about Twitter alerts no longer working. A bit of research led us to this page on Twitter's web site, which says that all accounts are now limited to sending 250 direct messages per day, whether via the API or not.

We send many many thousands of direct messages a day, hence this feature is now severely broken. We blow through 250 direct messages by 1 AM most days. So unless Twitter revises this limit to something much higher, this feature is permanently dead.

Many other services have probably been affected by this throttle. It just really bothers me how much Twitter keeps slapping third party developers in the face, the very people who helped build Twitter into what it is today. One thing I know, we will never add another Twitter feature to Clicky, ever. Our Twitter analytics feature still works, thankfully. For now, anyways...

Sticky data: Custom data, referrers, and campaigns saved in cookies

Fri, 27 Sep 2013 13:27:09 -0800

[Do not panic. You do not need to update your tracking code.]

We've just pushed a major update to our tracking code so that it works a bit more like one aspect of Google Analytics: some additional data is now saved in first party cookies (set with Javascript) for visitors to your site. That data being referrers, dynamic (UTM) campaign variables, and custom data set with clicky_custom.visitor (renamed from clicky_custom.session - don't worry, the old name will still work indefinitely). We're calling this sticky data, and the point of it is twofold.

First, for referrers and dynamic campaigns, this will better attribute how your visitors originally arrived at your site as they visit in the future. In other words, if they find your site via a link or search, then for all future visits where they go directly to your site instead of through a link, this original referrer (and/or campaign) will be attached to the new sessions. This will be particularly useful for those of you who have setup goal funnels using referrers or campaigns. These cookies are set for 90 days. Google does 180, which we think is a bit too long, so we're doing 90 instead. (Note: This does not work for static (pre-defined) campaigns that you create in Clicky's interface. It only works with the dynamic ones created with e.g. utm_campaign etc variables).

Second, if you set custom visitor data, we've thought for a while now how great it would be if that data stuck across visits. For example, if someone logs in and you attach their username to their session, that's great - every time they login, that is. But what about when they visit your site in the future and don't login? Well, now that we save this data in a cookie, their username will still get attached to their session, so you'll still know who they are. utm_custom data will also be saved! These cookies are set indefinitely, more or less.

A lot of you use custom visitor data to attach things that are very session specific though, such as shopping cart IDs, that kind of thing. With this in mind, there are only 3 specific keys we'll save by default for custom visitor data. Those keys are username, name, and email. Of course, if you have others you want to save in cookies, you can customize it with the new visitor_keys_cookie option. Click that link to learn more. (A short sketch of these options is at the end of this post.)

We think the vast majority of you will like this new sticky data. However, if for some reason you don't, we created another new clicky_custom option as well: sticky_data_disable. Setting this option will disable this data being saved to or read from cookies, without having to fully disable cookies. And of course, if you have fully disabled cookies, this data will never get saved in the first place.

Originally we wanted to add support for parsing GA's __utmz cookies, which is what GA uses to store campaign and referrer data for 6 months. The cookie format is fairly straightforward, but upon investigating our own GA cookies, we saw a lot of inconsistency across all the sites it had been set for. So we're going to hold off on that for now.

Our privacy section on cookies has been updated to reflect these updates. Enjoy![...]
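As promised, here's a minimal sketch of the options mentioned above. The option names are real; the values and exact formats are illustrative, so check the linked docs for the authoritative syntax:

    // Attach custom visitor data; 'username', 'name', and 'email' are the
    // keys saved to the sticky cookie by default (values illustrative).
    var clicky_custom = clicky_custom || {};
    clicky_custom.visitor = { username: 'jdoe', email: 'jdoe@example.com' };

    // Save additional custom keys to the cookie ('plan' is illustrative):
    clicky_custom.visitor_keys_cookie = ['username', 'name', 'email', 'plan'];

    // Or opt out of sticky data entirely, without disabling cookies:
    // clicky_custom.sticky_data_disable = true;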

New features: Site domains, and ignore pages by URL

Tue, 06 Aug 2013 19:31:59 -0800

We just pushed two new features today.

The first is site domains, where we break down the traffic on your site by the domain name each page view was on. This will only be interesting if your site has multiple sub-domains, or if you are tracking more than one root domain under a single site ID - but we know there are a lot of you out there this will apply to, as it has been requested a number of times.

This new report is under Content -> Domains. Note that we are only logging the domain name for each page view, to give you a general idea of where your traffic is going. You can't filter other reports by this data.

A commonly requested and related feature is breaking down traffic by directory. We will be adding this in the future.


The second feature is the ability to ignore page views based on their URL. This has been requested by many customers!

This can be set up in your site preferences, as explained in this new knowledge base article. You can enter one or more patterns to match, and all page views matching those patterns will be ignored. There are two side effects: 1) if a visitor only views pages that match your filters, the visitor will not be logged at all; 2) if a visitor lands on one of these filtered pages but then views other pages, some data that is only available on their first page view, such as the referrer, will not be available.

The help document linked above also explains how to set these up; read it for full details. We support wildcards, but only at the end of a pattern - see the hypothetical examples below.
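These are made-up patterns to illustrate the idea (the exact matching syntax is defined in the help document, so double-check there):

    /checkout/*    ignores every page whose URL starts with /checkout/
    /preview       ignores page views of this exact URL only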

We get requests to ignore traffic based on other things too, such as country, referrer, and organization. Some or all of these will likely be added in a future update, and when we do, we'll probably create an entirely new preferences section just for setting up things you don't want to log, instead of piling more options onto the main site preferences page.

The best return on investment we've made

Tue, 30 Jul 2013 17:38:04 -0800

Clicky is almost 7 years old and we've always been a very small team. We've handled all of the email ourselves since the beginning, using Gmail, which we like. I doubt there's a single human being who enjoys doing tech support, but it's critical to customer satisfaction. It's also one of those things that quickly spirals out of control if you're not on top of it every single day, and it takes away precious time from what we really want to do (write code).

At our peak we were getting almost 50 emails a day. Now we're down to about 20. How did we do that? Black magic? No. What we did was build a knowledge base with hundreds of topics (over 300 and counting), including many guides/how-tos and tons of the most common problems people have. Aside from hoping to get fewer emails, we really just wanted thorough guides and articles with screenshots to make things as easy as possible to understand for new and old customers alike, instead of having to type the same email 10 times a day when someone asks "where's my tracking code?". When a customer reads an article we link them to, they're now aware of the knowledge base if they weren't already, and will hopefully turn there first in the future when in need of help.

We released this silently back in March and the effect was immediately noticeable. Email volume dropped in half almost overnight, and has slowly declined a bit more since then as more people become aware of it. It also helps that the contact page is now only available as a link from the knowledge base page. We're not trying to hide our contact info like some companies do; rather, we just want people to see the knowledge base first.

It was a solid two weeks of work coming up with a list of articles to write, categorizing them into a tree, cross-linking them, and writing them all out with screenshots etc. It was an extremely boring and repetitive two weeks, but let me tell you: it was worth every dreadful second. It's been especially insightful using our heatmap tracking to see what items people are most interested in on each page, and how many people are exploring the knowledge base rather than clicking the link to contact us.

The only thing missing from it was a search form. That's part of the reason for today's post: to announce that we finally added a search form to the page. The reason it didn't have one before was because I was going to write a custom search with customized weighting and things like that. But out of the blue yesterday, it was suggested to me that I just use Google's search for it. Not a bad idea, I thought. At the very least it's better than a kick in the pants. It only took about 5 minutes of work, which was great, but also made me sad I didn't think of this before.

Anyways, when you do a search now, it redirects you to Google with inurl:/help/ appended onto the end of your search, so the results are only from our help section (which is mainly the knowledge base). Yeah, I could use their embedded search thing, but I don't like that. I was not planning to leave it as a Google search forever, but then I thought about the thousands of man-years of experience that Google has building a search engine, and frankly they're pretty dang good at it. With things like automatic spelling correction, acco[...]
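For the curious, the redirect trick is simple enough to sketch. This is a hypothetical reconstruction, not our actual code - the form markup, field names, and the site: restriction are all assumptions:

    <form id="kb-search">
      <input type="text" id="kb-query" placeholder="Search the knowledge base">
    </form>
    <script>
    // On submit, hand the query off to Google, scoped to the help section.
    // The site: operator is an assumption; the post only mentions appending
    // inurl:/help/ to the search.
    document.getElementById('kb-search').onsubmit = function () {
      var q = document.getElementById('kb-query').value;
      location.href = 'https://www.google.com/search?q=' +
        encodeURIComponent(q + ' site:getclicky.com inurl:/help/');
      return false; // suppress the normal form submission
    };
    </script>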

CDN fault-tolerance

Thu, 18 Jul 2013 17:04:06 -0800

This is a post for web developers. What do you do when your CDN fails to serve a resource to a visitor?

For a year or two, we've had a message at the top of our site that more or less said "If you're seeing this message, our style sheet has failed to load. Try clearing your cache, wait a few minutes and try again, etc." Of course, you wouldn't see this message if our stylesheet had loaded, because the stylesheet has a rule to hide it. That's a reasonable first step - at least we're communicating that we know something is wrong. But we still got customers emailing in a few times a month for whom the CDN just wouldn't work, no matter what they tried. Maybe a CDN server in their location was having an outage - even though traffic should get routed around that, failures still happen. Maybe the customer's DNS was failing. Lots of possibilities, and almost always impossible to track down, because eventually, within a day or two at the most, it would always start working for them again.

Clicky isn't usable without CSS (I know, I know...), and mildly broken without javascript (e.g. graphs). This kind of problem is just annoying, so I finally resolved to fix it.

As any page on our site is loading, we use a few inline javascript tests to see if our master javascript file and master CSS file have loaded. If we detect a failure for either of those, we try to load the resource a second time, directly from our web server. The code looks like the sketch at the end of this post.

As you can see, this goes right at the top of the BODY tag. There are two reasons we need to wait until the BODY tag. First, we are creating an element to test its visibility immediately; we have to wait until BODY to do that. Second, if there is a CSS failure, we need to inject a new CSS element into the HEAD tag, but we can't do that until we are closed out of the HEAD tag. So, we wait until BODY.

Here's how the code works. First, we create an empty element, #cdnfailtest, which we'll be testing the visibility of. Then we bust into javascript for the testing. We want to use jQuery to test whether this faux element is actually visible (because jQuery makes that very simple), so before doing that, we check if our javascript file has loaded by testing for window.jQuery. If that test fails, then our minified javascript wasn't loaded, so we need to load it from the web server. Unfortunately, we have no choice but to use document.write() in order to guarantee the script file is loaded immediately. I did notice during testing, however, that even with document.write(), the javascript file wasn't always immediately available - it was a bit random on every refresh whether or not it worked. So we wrap the rest of the code within a setTimeout() call to delay it slightly (500 milliseconds).

After the 500 milliseconds, we start the CSS visibility test. We use jQuery to find the element #cdnfailtest and see if it's visible. (If jQuery still isn't loaded, we assume something is wrong and force-load the CSS from the web server anyways, just to be safe.) If the element is visible, that means the CSS did not load, because otherwise its rules would hide the element. This is the part where we need the HEAD element to be already fully declared, so we can attach and inject the CSS file into it. That's what the rest of the [...]
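The original post embedded the snippet as an image, which is gone, so what follows is a minimal reconstruction from the description above - treat it as a sketch. The file paths (/js/all.min.js, /css/all.min.css) are assumptions, not our actual filenames.

    <body>
    <div id="cdnfailtest"></div>
    <script>
    // If the CDN copy of our minified javascript (which bundles jQuery)
    // failed, load it again directly from the web server. document.write()
    // is the only way to guarantee it loads immediately.
    if (!window.jQuery) {
      document.write('<scr' + 'ipt src="/js/all.min.js"></scr' + 'ipt>');
    }

    // Even with document.write(), the script isn't always available right
    // away, so delay the CSS test slightly.
    setTimeout(function () {
      function loadCssFromWebServer() {
        var link = document.createElement('link');
        link.rel = 'stylesheet';
        link.href = '/css/all.min.css'; // web server copy, not the CDN
        document.getElementsByTagName('head')[0].appendChild(link);
      }
      if (!window.jQuery) {
        // jQuery still missing even after the fallback; something is
        // wrong, so force-load the CSS just to be safe.
        loadCssFromWebServer();
      } else if (jQuery('#cdnfailtest').is(':visible')) {
        // The stylesheet contains a rule that hides #cdnfailtest. If the
        // element is visible, the CSS never loaded, so inject the web
        // server copy into HEAD.
        loadCssFromWebServer();
      }
    }, 500);
    </script>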