Mon, 03 Oct 2016 13:30:04 +0000

I enjoy exploring an ecosystem. Last year, I spent a month with an Android phone and tablet to see how they compared to iOS. Now, I'm going to try something similar by switching to Windows as my primary machine. Dave Rupert went through a similar exercise. Microsoft was kind enough to offer up a Surface Book for this experiment. When the Surface Book was announced, lots of people gave it very positive reviews.

I have a Samsung tablet from a few years ago, when Microsoft was exploring what it meant for a desktop OS to also be a tablet OS in Windows 7. That Samsung tablet, however, was bulky, ran hot, and the battery ran down quickly. Ick.

Initial Impressions

My first hour or so with the Surface Book created mixed feelings.

On the hardware side, this laptop is quite nice. The detachable screen becomes a tablet and is surprisingly light. The keyboard feels good. The trackpad feels good. This is a really nice laptop.

Having a laptop with a touch screen is nice. For the most part, I use the keyboard where I can, but sometimes a button is easier to just hit with my finger.

The trackpad finger gestures are nice and remind me of the handy gestures on the Mac. I enjoy the three-finger swipe to switch applications or to minimize all the apps.

I do find the multiple-desktop feature annoying, mostly because it creates isolated app switching for each desktop. If the app you want to switch to is on the other desktop, then you need to switch desktops and then switch apps.

One particular thing that bugs me: the top row of function and media keys. There's plenty of room to have had a separate function key row and media row. Instead, there's a function key that has to be turned on or off to choose between them. I never use the function keys on my Mac, but in Windows I use them all the time. I have to consciously be aware of whether the key is enabled, which is more cognitive load than I would like every time I need to use a function or media key.
Speaking of the media keys, I'd rather have the ability to control screen brightness via the keyboard (which I do multiple times a day) than the ability to control the brightness of the keyboard backlighting (which I do almost never).

Also, I find it weird that when the laptop is folded over, it doesn't fold flush. Instead, it looks like a book with a pen stuck in the middle of it. (Hence the name, is my guess.)

On the software side, I'm having a tough time pinpointing the things that really bother me. I think a lot of it comes down to polish, which in many cases comes down to how well third parties built their apps.

For example, I'm writing this article in an app called WriteMonkey. It's a fantastic app for writing in Markdown. It runs full screen and gets out of the way. Love it.

The Twitter app, on the other hand, sometimes blanks the screen before reloading messages. The Facebook app seems to have a different scroll sensitivity than the rest of the OS. Edge supports two-finger swipe to go back, but nothing else does. And Mail seems to have its own text entry with a custom context menu, and the shortcut to paste without formatting (Ctrl-Shift-V) is mapped to something else. Oh, and Messenger locks up quite frequently, requiring a restart.

Of course, I'm sure I could come up with a similar list for the Mac. I've just been on it long enough to get over the annoyances. When Windows 7 first came out, it felt spartan to the point of feeling unfinished. Windows 10 definitely has a lot more polish.

Also, the Cortana voice recognition seems to work really well. I can't tell you how many times I ask Siri for something and she completely gets it wrong. "Play my loved playlist." "Now playing Love Yourself by Justin Bieber." "Goddammit, Siri!"

The Cortana integration in the Start menu is really nice, too. I like the speed and design. It feels very natural to just hit the Windows key and start typing for what I want. It finds what I want accurately and quickly.
Living on the Edge

I want to use Edge as my default browser but the biggest hiccup is my password man[...]
Thu, 22 Sep 2016 20:37:47 +0000
I hear this question quite a bit lately. Our industry feels like it’s expanding exponentially with new techniques and technologies. People feel overwhelmed and unsure how to ingest it all.
I’ve found that my learning process has three phases:
I read a lot. I’ll click on links from Hacker News, Facebook, and Twitter. I’ll read about new techniques and new technologies and integrate those learnings into what I already know.
This is very superficial. With this knowledge, I can refer back to things if somebody asks about how to solve a particular problem. I couldn’t necessarily apply the approach myself yet but I have enough to know that a solution exists. That in itself can be quite useful.
From there, when I want to learn more about a given thing, I build something with it. Most recently, I wanted to learn about web beacons and took the time to make one to see how it worked. I do this frequently: I’ll build small one-page apps to test out a concept. The exercise may take anywhere from an hour to a week.
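As an example of how small such an exercise can be, here's a rough sketch of the byte encoding an Eddystone-URL beacon uses, written for Node. It collapses the URL scheme to a single byte; the spec's domain-ending expansion bytes are skipped for brevity, so this is an illustrative toy rather than a full implementation.

```javascript
// Encode a URL into the compact form used in Eddystone-URL beacon frames:
// the scheme collapses to a single byte, the rest is sent as plain ASCII.

const SCHEMES = ['http://www.', 'https://www.', 'http://', 'https://'];

function encodeUrl(url) {
  // The longer "www." prefixes come first in SCHEMES, so findIndex
  // matches them before the bare "http://" / "https://" variants.
  const scheme = SCHEMES.findIndex((prefix) => url.startsWith(prefix));
  if (scheme === -1) throw new Error('URL must start with http(s)://');
  const rest = url.slice(SCHEMES[scheme].length);
  return Buffer.from([scheme, ...Buffer.from(rest)]);
}

console.log(encodeUrl('https://snook.ca/').toString('hex')); // 03736e6f6f6b2e63612f
```

An hour spent on a toy like this teaches more about why beacon URLs need to be short than any amount of reading would.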
Building something expands my knowledge on a topic and now I can speak more authoritatively on the pros and cons of why and when you’d want to use such a technique or technology.
The last step is to write about it. This could be a blog post, a book, or a conference talk. When I write about a topic, I explore the edges of what I know, the edges outside of what I needed to initially implement the idea.
For example, it’s one thing to know that web beacons exist. It’s another to know how to implement them. And it’s yet another to know their range and the other limitations that exist.
In writing a blog post about all:initial, I forced myself to test in every browser and discovered how inconsistent the implementation was.
It’s not necessary to go through this process for everything. You can stay at the superficial level for many things and only dive deeper when you need to, like implementing an idea on a client project.
Likewise, you don’t need to write about everything you work on. Writing is, for me, how I learn a topic more intimately. Not everything needs to be learned that deeply.
As your career develops, you’ll gain a sense of what things to explore sooner rather than later, or not to explore at all.
I have a list of things I’d like to explore—like progressive web apps, service workers, and web components—and when I do, I’ll go through this same process again and again.
Thu, 15 Sep 2016 14:46:00 +0000

After hearing about the Physical Web through a second-hand tweet, I decided to look into what it is and why we want it. The Physical Web is essentially a way to broadcast a URL using Bluetooth (Bluetooth Low Energy, or BLE, specifically). Devices can then listen for those beacons. The beacons can broadcast to a distance from a few centimetres away to possibly 450 metres away.

The Physical Web uses Eddystone, an open source beacon specification that Google proposed as a competitor to Apple’s proprietary iBeacon technology. Google released Eddystone in July 2015.

What’s the Frequency, Kenneth

My initial reaction was “cool!” The practical applications could be quite numerous. For example, the website shows a dog tag or a parking meter broadcasting. Stores could feature sales as you walk into them and have those broadcast directly to your phone.

In Uri† Shaked’s overview of the Physical Web, he talks about being able to broadcast conference slides while doing a talk. Conferences could broadcast the day’s schedule. I could imagine going by a restaurant and being able to load up their menu via a beacon. Bus stops could broadcast maps and times.

The QR Code of the Future

Sadly, my mind quickly devolved into the annoyance of numerous notifications, like popup windows and other distracting adverts, vying for my attention. Imagine that same conference with companies pitching their wares or recruiters filling up your notifications. There could quickly be so many notifications as to make them near useless. Walking into a store, your phone buzzing with dozens of product sales, as companies pay for beacons and shelf space.

The people behind the implementation of beacons at Google have worked hard to make sure they're not annoying. On Android, all beacons are wrapped up into a single silent notification. Essentially, you have to seek out beacons rather than have beacons nag you.

Ultimately, though, beacons feel like QR codes.
They’ll be all over the place and, for the most part, ignored.

Priorities

With the possible onslaught of beacons, some type of filtering or prioritization would seem ideal. Otherwise, I think most people would rather just turn beacons off, which wouldn’t really be of much use to those who use them. (Uri recognizes this issue in the comments of his article.)

Trying them out

Uri’s article does a great job of describing how to set up beacons on your laptop or Raspberry Pi, and how to configure your iOS or Android devices to listen for them.

Broadcasting a Beacon using Node

On my Mac, I was able to broadcast a beacon with a couple of easy lines:

npm install --save eddystone-beacon
node -e "require('eddystone-beacon').advertiseUrl('https://snook.ca/');"

Of note, URLs need to be https. I tried specifying an http URL as a beacon and it couldn’t be found. I tried specifying an http URL that redirects to https and it could be found. I’m not sure if it’s the sender or the receiver that’s doing the double-check on the URL. (Also, you might have to use sudo to get npm to install everything correctly.)

Listening to Beacons

On iOS, add Chrome to the Today screen. You’ll be prompted to listen for Physical Web beacons. Don’t worry, you can disable it later. On Android, notifications will come up automatically.

Da Bears

I’m still bearish on beacons but I like the potential. I may try setting up beacons at upcoming conferences and workshops to see how well it works.

† How cool is it to have a name like URI (Uniform Resource Identifier)?!

Updated to point out that beacon notifications are silent.
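A side note on what's actually in the air: the URL in an Eddystone-URL frame isn't broadcast as plain text. To fit BLE's tiny advertising packets, the open Eddystone spec encodes the scheme and common domain endings as single bytes. Here's a minimal decoding sketch in Node; the sample frame below is constructed by hand for illustration, not captured from a real beacon.

```javascript
// Decode the service-data bytes of an Eddystone-URL frame.
// Frame layout: [0x10 frame type, calibrated TX power, scheme byte, encoded URL...]

const SCHEMES = ['http://www.', 'https://www.', 'http://', 'https://'];

// Byte values 0x00-0x0D stand for common URL endings; higher values are ASCII.
const EXPANSIONS = [
  '.com/', '.org/', '.edu/', '.net/', '.info/', '.biz/', '.gov/',
  '.com', '.org', '.edu', '.net', '.info', '.biz', '.gov',
];

function decodeEddystoneUrl(frame) {
  if (frame[0] !== 0x10) throw new Error('Not an Eddystone-URL frame');
  let url = SCHEMES[frame[2]]; // frame[1] is TX power; not needed for the URL
  for (const byte of frame.slice(3)) {
    url += byte < EXPANSIONS.length ? EXPANSIONS[byte] : String.fromCharCode(byte);
  }
  return url;
}

// A hand-built frame advertising https://snook.ca/ (0xEB is a sample TX power).
const frame = Buffer.from([0x10, 0xeb, 0x03, ...Buffer.from('snook.ca/')]);
console.log(decodeEddystoneUrl(frame)); // → https://snook.ca/
```

The encoded URL has to squeeze into roughly 18 bytes, which is why beacon-friendly URLs tend to be short or run through a URL shortener.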
Wed, 03 Aug 2016 18:25:07 +0000

I’ve been fortunate over the past decade to have been able to, in various capacities, work from home—or work in place, as some like to call it. First as a freelancer, then Yahoo!, then again when I went to work at Xero, and now back to working for myself. Shopify has been the exception to the rule in that time: while I tried to instill a culture of remote work, I don’t think I managed to move the needle much within the organization.

So, after all that time, what have I seen that works and doesn’t work? If I were to start my own company, would I allow remote workers? If I were to join another company, how would I foster an environment that encouraged remote work?

What works

With the hindsight that comes from experience, what made it work? For me, it was about being given a discrete task with few or no dependencies. That’s it in a nutshell.

When I was a freelancer, I worked with clients to determine the scope of work, was given the autonomy to do the work, and then delivered the work.

At Yahoo!, especially in the beginning, I had a defined role that didn’t require much communication with the team or teams at large. I would get designs and then build out the front-end based on them. Again, being given the autonomy and trust to build things as I felt they should be built meant that I had very few dependencies. There was very little that would slow down my ability to produce work. (Well, except dealing with my own distractions—something that has become more difficult of late.)

It was fun to hop into a conference call from South Africa. Or to have a co-worker contact me from the passenger seat of a car driving the coast of Italy. I’ve done conference talks from hotel rooms. I’ve done work meetings from coffee shops around the world.

What didn’t work

Not everything was peachy, though. At Yahoo!, I was placed into a managerial role. Managing a team remotely, for example, didn’t go so well.
I needed to be in lots of meetings, and if people didn’t log into the teleconference, I was left out of the loop. I didn’t really know how to manage a team, and that training was never provided, nor was it requested.

In the early days of my time at Shopify, I tried to convince management that supporting remote workers would make us a better company. But I didn’t know how to go about making remote work work for the company. As a result, the initiatives that were implemented just fizzled. Instead, each remote office was given tasks that allowed them to work more autonomously.

At Xero, my struggle was mostly in dealing with time zones. Living in a time zone 8 hours away meant that meetings often fell in the evenings when I had family commitments. As a result, I missed countless meetings. While my team never made mention of it, I felt increasingly out of the loop and ineffectual.

Would I Lead a Remote Team?

Given my experience, would I build and lead a remote team? Yes. Now that I have some experience behind me, I think I could make it work. To do so, here are some things I would do:

Have discrete tasks

Give people ownership over something and give them the autonomy to build that thing with little initial oversight. There are opportunities through code reviews, design reviews, and other exercises to ensure that people are on the right track. Otherwise, get out of their way.

Communicate in the open

When you have some people in an office and some people out of the office, it’s easy for some communication to be isolated. The team in the office might come to a conclusion and never communicate it outside of the room where that decision was made. For distributed teams, that’s awful. Communicate in the open. Every decision is documented. Find a place, be it Slack, GitHub, Trello, or wherever, and get the word out.

Fostering Remote in a Company

Considering my failures at Shopify, I’ve wondered how, going into a new company, I could enable remote work.
I think I would take a more grassroots approach. I’d implem[...]