Subscribe: SitePoint
Language: English



Learn CSS | HTML5 | JavaScript | WordPress | Tutorials - Web Development | Reference | Books and More

Last Build Date: Thu, 23 Nov 2017 12:00:31 +0000


Optimizing CSS: Tweaking Animation Performance with DevTools

Thu, 23 Nov 2017 12:00:31 +0000

This article is part of a series created in partnership with SiteGround. Thank you for supporting the partners who make SitePoint possible.

CSS animations are known to be super performant. While that's true for simple animations on a few elements, if you add more complexity and don't code your animations with performance in mind, website users will soon take notice and possibly get annoyed.

In this article, I introduce some useful browser dev tools features that will enable you to check what happens under the hood when animating with CSS. This way, when an animation looks a bit choppy, you'll have a better idea why and what you can do to fix it.

Developer Tools for CSS Performance

Your animations need to hit 60 fps (frames per second) to run fluidly in the browser; the lower the rate, the worse your animation will look. This means the browser has no more than about 16 milliseconds to do its job for each frame. But what does it do during that time? And how would you know if your browser is keeping up with the desired framerate?

I think nothing beats user experience when it comes to assessing the quality of an animation. However, developer tools in modern browsers, while not always 100% reliable, have been getting smarter and smarter, and there's quite a bit you can do to review, edit and debug your code using them. This is also true when you need to check framerate and CSS animation performance. Here's how it works.

Exploring the Performance Tool in Firefox

In this article I use the Firefox Performance Tool; the other big contender is the Chrome Performance Tool. You can pick your favorite, as both browsers offer powerful performance features.

To open the developer tools in Firefox, choose one of these options:

- Right-click on your web page and choose Inspect Element in the context menu
- If you use the keyboard, press Ctrl + Shift + I on Windows and Linux or Cmd + Opt + I on OS X.

Next, click on the Performance tab.
Here, you'll find the button that lets you start a recording of your website's performance. Press that button and wait for a few seconds, or perform some action on the page. When you're done, click the Stop Recording Performance button. In a split second, Firefox presents you with tons of well-organized data that will help you make sense of which issues your code is suffering from.

The Waterfall section is perfect for checking issues related to CSS transitions and keyframe animations. Other sections are the Call Tree and the JS Flame Chart, which you can use to find out about bottlenecks in your JavaScript code.

The Waterfall has a summary section at the top and a detailed breakdown. In both, the data is color-coded:

- Yellow bars refer to JavaScript operations.
- Purple bars refer to calculating HTML elements' CSS styles (recalculate styles) and laying out your page (layout). Layout operations are quite expensive for the browser to perform, so if you animate properties that involve repeated layouts (also known as reflows), e.g. margin, padding, top and left, the results could be janky.
- Green bars refer to painting your elements into one or more bitmaps (paint). Animating properties like color, background-color and box-shadow involves costly paint operations, which could be the cause of sluggish animations and poor user experience.

You can also filter the type of data you want to inspect. For instance, if I'm interested only in CSS-related data, I can deselect everything else by clicking on the filter icon at the top left of the screen.

The big green bar below the Waterfall summary represents information on the framerate. A healthy graph looks quite high but, most importantly, consistent: without too many deep gaps. Let's illustrate this with an example.
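To see the difference between layout-triggering and compositor-friendly animations in the Waterfall for yourself, compare something like the following two rules (the class and keyframe names here are hypothetical, chosen just for illustration):

```css
/* Animating `left` forces layout (reflow) on every frame:
   expect repeated purple bars in the Waterfall */
.box-janky {
  position: relative;
  animation: slide-left 1s ease-in-out infinite alternate;
}
@keyframes slide-left {
  to { left: 300px; }
}

/* Animating `transform` can be handled by the compositor,
   skipping layout and paint: expect a much calmer Waterfall */
.box-smooth {
  animation: slide-transform 1s ease-in-out infinite alternate;
}
@keyframes slide-transform {
  to { transform: translateX(300px); }
}
```

Recording each version in the Performance panel should show a stream of purple layout bars for the first and little to none for the second, with a correspondingly steadier framerate graph.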
The Performance Tool in Action

Continue reading: Optimizing CSS: Tweaking Animation Performance with DevTools [...]

Case Study: Optimizing CommonMark Markdown Parser with Blackfire

Thu, 23 Nov 2017 06:12:36 +0000

As you may know, I am the author and maintainer of the PHP League's CommonMark Markdown parser. This project has three primary goals:

1. fully support the entire CommonMark spec
2. match the behavior of the JS reference implementation
3. be well-written and super-extensible so that others can add their own functionality

This last goal is perhaps the most challenging, especially from a performance perspective. Other popular Markdown parsers are built using single classes with massive regex functions. As you can see from this benchmark, it makes them lightning fast:

Library                     Avg. Parse Time    File/Class Count
Parsedown 1.6.0             2 ms               1
PHP Markdown 1.5.0          4 ms               4
PHP Markdown Extra 1.5.0    7 ms               6
CommonMark 0.12.0           46 ms              117

Unfortunately, because of the tightly-coupled design and overall architecture, it's difficult (if not impossible) to extend these parsers with custom logic.

For the League's CommonMark parser, we chose to prioritize extensibility over performance. This led to a decoupled object-oriented design which users can easily customize, and it has enabled others to build their own integrations, extensions, and other custom projects. The library's performance is still decent: the end user probably can't differentiate between 42ms and 2ms (and you should be caching your rendered Markdown anyway). Nevertheless, we still wanted to optimize our parser as much as possible without compromising our primary goals. This blog post explains how we used Blackfire to do just that.

Profiling with Blackfire

Blackfire is a fantastic tool from the folks at SensioLabs. You simply attach it to any web or CLI request and get this awesome, easy-to-digest performance trace of your application's request. In this post, we'll be examining how Blackfire was used to identify and optimize two performance issues found in version 0.6.1 of the league/commonmark library.
Let's start by profiling the time it takes league/commonmark to parse the contents of the CommonMark spec document. Later on, we'll compare this benchmark to our changes in order to measure the performance improvements.

Quick side note: Blackfire adds overhead while profiling things, so the execution times will always be much higher than usual. Focus on the relative percentage changes instead of the absolute "wall clock" times.

Optimization 1

Looking at our initial benchmark, you can easily see that inline parsing with InlineParserEngine::parse() accounts for a whopping 43.75% of the execution time. Clicking this method reveals more information about why this happens: here we see that InlineParserEngine::parse() is calling Cursor::getCharacter() 79,194 times, once for every single character in the Markdown text. Here's a partial (slightly-modified) excerpt of this method from 0.6.1:

```php
public function parse(ContextInterface $context, Cursor $cursor)
{
    // Iterate through every single character in the current line
    while (($character = $cursor->getCharacter()) !== null) {
        // Check to see whether this character is a special Markdown character
        // If so, let it try to parse this part of the string
        foreach ($matchingParsers as $parser) {
            if ($res = $parser->parse($context, $inlineParserContext)) {
                continue 2;
            }
        }

        // If no parser could handle this character, then it must be a plain text character
        // Add this character to the current line of text
        $lastInline->append($character);
    }
}
```

Blackfire tells us that parse() is spending over 17% of its time checking every. single. character. one. at. a. time. But most of these 79,194 characters are plain text which don't need special handling! Let's optimize this.
Instead of adding a single character at the end of our loop, let's use a regex to capture as many non-special characters as we can:

```php
public function parse(ContextInterface $context, Cursor $cursor)
{
    // Iterate through every single character in the current line
    while (($character = $cursor->getCharacter()) !== n[...]
```
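The batching idea can be sketched in isolation like this. Note that this is my own simplified illustration: the helper name and the character class below are assumptions, not league/commonmark's actual list of special characters.

```php
<?php
// Sketch of regex batching: instead of appending one character per loop
// iteration, grab a whole run of "plain" characters with a single match.
// The character class here is illustrative only, NOT the real set of
// special Markdown characters used by league/commonmark.
function consumePlainText(string $text, int $pos): ?string
{
    // /A anchors the match at the given offset; the negated class
    // matches as many consecutive non-special characters as possible.
    if (preg_match('/[^*_`\[\]\\\\<&\n]+/A', $text, $matches, 0, $pos)) {
        return $matches[0];
    }

    return null; // the next character is "special" (or we're at the end)
}

$markdown = 'Hello world *emphasis* plain tail';
$run = consumePlainText($markdown, 0);
// $run is "Hello world " -- twelve characters consumed in one call
// instead of twelve trips through the parser loop.
```

One regex call replacing dozens of per-character iterations is exactly the kind of change that shows up as a large relative drop in a Blackfire trace.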

How to Optimize Docker-based CI Runners with Shared Package Caches

Tue, 21 Nov 2017 17:00:28 +0000

At Unleashed Technologies we use GitLab CI with Docker runners for our continuous integration testing. We've put significant effort into speeding up build execution. One of the optimizations we made was to share a cache volume across all the CI jobs, allowing them to share files like package download caches.
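One common way to set this up is in the GitLab Runner configuration, mounting a single host directory into every job container. This is a sketch with illustrative values; the runner name, image, and host path are assumptions, not the article's actual configuration:

```toml
# /etc/gitlab-runner/config.toml (excerpt)
[[runners]]
  name = "docker-runner"        # hypothetical runner name
  executor = "docker"
  [runners.docker]
    image = "php:7.1"
    # Mount one host directory into every job container so package
    # managers (Composer, npm, yarn, ...) can reuse downloaded archives.
    volumes = ["/srv/cache:/cache"]
```

Jobs then point their package managers at the mount, for example by exporting COMPOSER_CACHE_DIR=/cache/composer before running composer install.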

Continue reading: How to Optimize Docker-based CI Runners with Shared Package Caches

Upgrade Your Project with CSS Selector and Custom Attributes

Mon, 20 Nov 2017 17:30:57 +0000

Element selectors for Selenium WebDriver are one of the core components of an automation framework, and are the key to interacting with any web application. In this review of automation element selectors, we will discuss the various strategies, explore their capabilities, weigh their pros and cons, and ultimately recommend the best selector strategy: custom attributes combined with CSS selectors.
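As a quick illustration of the recommended strategy (the attribute name and markup below are hypothetical examples, not from the article):

```html
<!-- A dedicated test attribute survives styling and layout refactors
     that would break class-based or brittle XPath selectors. -->
<button class="btn btn-primary" data-test-id="submit-order">
  Place order
</button>
```

In WebDriver, this element would then be located with the CSS selector `[data-test-id="submit-order"]`, which stays stable even if the button's classes or position in the DOM change.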

Continue reading: Upgrade Your Project with CSS Selector and Custom Attributes

ProtoPie, the Hi-Fi Prototyping Tool That Will Improve Your Workflow

Mon, 20 Nov 2017 17:30:36 +0000

This article was originally published on ProtoPie. Thank you for supporting the partners who make SitePoint possible.

As a designer, bridging the gap with stakeholders is vitally important. Properly conveying design and interaction ideas quickly with solely static UI designs, mockups, wireframes and even simple click-through prototypes simply doesn't work.

This is what Tony Kim thought. During his time at Google as an Interaction Designer, he wanted to build highly interactive prototypes easily and quickly so that he could share his ideas clearly and ultimately bridge the gap between him and his stakeholders. The tools at his disposal didn't allow him to do this: easy tools didn't provide the high fidelity Kim was looking for, while tools for more advanced prototyping usually had a steep learning curve and/or required coding, leading to a lengthy prototyping process. This is how his brainchild, ProtoPie, was born.

This article will give you a brief overview of what ProtoPie is, its philosophy, and why you should adopt ProtoPie as your primary prototyping tool to improve your workflow.

What is ProtoPie?

ProtoPie is a powerful hi-fi prototyping tool on Mac and Windows for mobile apps that empowers designers to build the most advanced, highly interactive prototypes easily and quickly, deployable and shareable on any device while utilizing smart device sensors.

The philosophy behind ProtoPie is that high-fidelity prototyping should be done easily and quickly. Tony Kim, founder of ProtoPie, explains: “I believe in hi-fi prototyping.
The prototypes that any designer should make are the ones that resemble the real deal, in regards to the way the user interacts.”

As hi-fi interactions are key in the design process, the golden formula of ProtoPie surrounding interactions is simple, straightforward, and runs like a thread through the product:

interaction = object + trigger + response

This concept model serves as the foundation of ProtoPie's user interface, lowering the threshold to build high-fidelity prototypes while making the learning curve truly gradual. Due to its ease of use, gradual learning curve and intuitive user interface, ProtoPie won the Red Dot Award 2017 for Interface Design.

What Can ProtoPie Do?

As you might know by now, the core purpose of ProtoPie is to empower designers to build hi-fi prototypes easily and quickly. Many designers out there still believe that advanced prototyping without coding is not possible. This is simply not true. By piecing some hi-fi interactions together, you may have a working interactive prototype within minutes. ProtoPie shows how easy it is to create interactions according to its golden formula, which you can test right away.

ProtoPie distinguishes itself from other tools by supporting the sensors in smart devices. To make prototypes feel like the real deal when deployed on any smart device, built-in sensors simply need to be taken into consideration. The sensors supported by ProtoPie are Tilt, Sound, Compass, 3D Touch and Proximity.

But it doesn't stop there. You can create interactions across multiple devices using ProtoPie, which gives designers more freedom in creating high-fidelity prototypes. Using the Send response and Receive trigger in ProtoPie, it's possible to send and receive messages upon establishing a link among devices.
Besides being able to create hi-fi prototypes easily and quickly, designers are given various options for deploying and sharing their creations with stakeholders. You can use both ProtoPie Player (available on iOS and Android) and ProtoPie Cloud to share prototypes easily with stakeholders, and you can run and test prototypes using ProtoPie Player or a desktop or mobile browser. You can save [...]

How to Optimize SQL Queries for Faster Sites

Mon, 20 Nov 2017 17:00:57 +0000

This article was originally published on the Delicious Brains blog, and is republished here with permission.

You know that a fast site == happier users, improved ranking from Google, and increased conversions. Maybe you even think your WordPress site is as fast as it can be: you've looked at site performance, from the best practices of setting up a server, to troubleshooting slow code, and offloading your images to a CDN. But is that everything?

With dynamic, database-driven websites like WordPress, you might still have one problem on your hands: database queries slowing down your site. In this post, I'll take you through how to identify the queries causing bottlenecks, how to understand the problems with them, along with quick fixes and other approaches to speed things up. I'll be using an actual query we recently tackled that was slowing things down on our customer portal.

Identification

The first step in fixing slow SQL queries is to find them. Ashley has sung the praises of the debugging plugin Query Monitor on the blog before, and it's the database queries feature of the plugin that really makes it an invaluable tool for identifying slow SQL queries. The plugin reports on all the database queries executed during the page request. It allows you to filter them by the code or component (the plugin, theme or WordPress core) calling them, and highlights duplicate and slow queries.

If you don't want to install a debugging plugin on a production site (maybe you're worried about adding some performance overhead), you can opt to turn on the MySQL Slow Query Log, which logs all queries that take over a certain amount of time to execute. This is relatively simple to configure, including where to log the queries to. As this is a server-level tweak, the performance hit will be less than that of a debugging plugin on the site, but it should be turned off when not in use.
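For reference, the slow query log is enabled in the MySQL server configuration. This is a sketch with typical values; the log path and the one-second threshold are illustrative assumptions you should adjust for your server:

```ini
# /etc/mysql/my.cnf (excerpt) -- values are illustrative
[mysqld]
slow_query_log      = 1                               ; enable the log
slow_query_log_file = /var/log/mysql/mysql-slow.log   ; where to write it
long_query_time     = 1                               ; log queries slower than 1 second
```

Remember to disable `slow_query_log` again once you've gathered enough data, since logging every slow query adds its own small overhead.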
Understanding

Once you have found an expensive query that you want to improve, the next step is to try to understand what is making it slow. Recently, during development on our site, we found a query that was taking around 8 seconds to execute!

```sql
SELECT
    l.key_id,
    l.order_id,
    l.activation_email,
    l.licence_key,
    l.software_product_id,
    l.software_version,
    l.activations_limit,
    l.created,
    l.renewal_type,
    l.renewal_id,
    l.exempt_domain,
    s.next_payment_date,
    s.status,
    pm2.post_id AS 'product_id',
    pm.meta_value AS 'user_id'
FROM
    oiz6q8a_woocommerce_software_licences l
    INNER JOIN oiz6q8a_woocommerce_software_subscriptions s ON s.key_id = l.key_id
    INNER JOIN oiz6q8a_posts p ON p.ID = l.order_id
    INNER JOIN oiz6q8a_postmeta pm ON pm.post_id = p.ID
        AND pm.meta_key = '_customer_user'
    INNER JOIN oiz6q8a_postmeta pm2 ON pm2.meta_key = '_software_product_id'
        AND pm2.meta_value = l.software_product_id
WHERE
    p.post_type = 'shop_order'
    AND pm.meta_value = 279
ORDER BY
    s.next_payment_date
```

We use WooCommerce and a customized version of the WooCommerce Software Subscriptions plugin to run our plugin store. The purpose of this query is to get all subscriptions for a customer where we know their customer number. WooCommerce has a somewhat complex data model: even though an order is stored as a custom post type, the id of the customer (for stores where each customer gets a WordPress user created for them) is not stored as the post_author, but instead as a piece of post meta data. There are also a couple of joins to custom tables created by the software subscriptions plugin. Let's dive in to understand the query more.

MySQL is your Friend

MySQL has a handy DESCRIBE statement which can be used to output information about a table's structure, such as its columns, data types and defaults. So if you execute DESCRIBE wp_postmeta; you will see the following resu[...]
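Alongside DESCRIBE, MySQL's EXPLAIN statement is the standard tool for this kind of analysis: prefix the slow query with it and MySQL reports which indexes it considered, which it actually used, and roughly how many rows each join touches. A sketch, using an abbreviated form of the query above:

```sql
-- Show a table's structure: columns, data types, keys, defaults
DESCRIBE wp_postmeta;

-- Ask MySQL for its execution plan instead of the results
EXPLAIN SELECT l.key_id, l.order_id
FROM oiz6q8a_woocommerce_software_licences l
INNER JOIN oiz6q8a_posts p ON p.ID = l.order_id
WHERE p.post_type = 'shop_order';
```

A row in the EXPLAIN output with type ALL (a full table scan) or a large rows estimate is usually where the time is going.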

How to Read Big Files with PHP (Without Killing Your Server)

Thu, 16 Nov 2017 18:00:05 +0000

It’s not often that we, as PHP developers, need to worry about memory management. The PHP engine does a stellar job of cleaning up after us, and the web server model of short-lived execution contexts means even the sloppiest code has no long-lasting effects.

There are rare times when we may need to step outside of this comfortable boundary, like when we're trying to run Composer for a large project on the smallest VPS we can create, or when we need to read large files on an equally small server. It’s the latter problem we'll look at in this tutorial. The code for this tutorial can be found on GitHub.

Measuring Success

The only way to be sure we’re making any improvement to our code is to measure a bad situation and then compare that measurement to another after we’ve applied our fix. In other words, unless we know how much a “solution” helps us (if at all), we can’t know if it really is a solution or not.

There are two metrics we can care about. The first is CPU usage: how fast or slow is the process we want to work on? The second is memory usage: how much memory does the script take to execute? These are often inversely proportional, meaning that we can offload memory usage at the cost of CPU usage, and vice versa.

In an asynchronous execution model (like with multi-process or multi-threaded PHP applications), both CPU and memory usage are important considerations. In traditional PHP architecture, these generally become a problem when either one reaches the limits of the server.

It's impractical to measure CPU usage inside PHP. If that’s the area you want to focus on, consider using something like top on Ubuntu or macOS. For Windows, consider using the Linux Subsystem, so you can use top in Ubuntu.

For the purposes of this tutorial, we’re going to measure memory usage. We’ll look at how much memory is used in “traditional” scripts. We’ll implement a couple of optimization strategies and measure those too.
In the end, I want you to be able to make an educated choice. The methods we’ll use to see how much memory is used are:

```php
// formatBytes is taken from the documentation

memory_get_peak_usage();

function formatBytes($bytes, $precision = 2) {
    $units = array("b", "kb", "mb", "gb", "tb");

    $bytes = max($bytes, 0);
    $pow = floor(($bytes ? log($bytes) : 0) / log(1024));
    $pow = min($pow, count($units) - 1);

    $bytes /= (1 << (10 * $pow));

    return round($bytes, $precision) . " " . $units[$pow];
}
```

We’ll use these functions at the end of our scripts, so we can see which script uses the most memory at one time.

What Are Our Options?

There are many approaches we could take to read files efficiently. But there are also two likely scenarios in which we could use them. We could want to read and process data all at the same time, outputting the processed data or performing other actions based on what we read. We could also want to transform a stream of data without ever really needing access to the data.

Let’s imagine, for the first scenario, that we want to be able to read a file and create separate queued processing jobs every 10,000 lines. We’d need to keep at least 10,000 lines in memory, and pass them along to the queued job manager (whatever form that may take).

For the second scenario, let’s imagine we want to compress the contents of a particularly large API response. We don’t care what it says, but we need to make sure it’s backed up in a compressed form.

In both scenarios, we need to read large files. In the first, we need to know what the data is. In the second, we don’t care what the data is. Let’s explore these options…

Reading Files, Line By Line

There are many functions for working with files. Let’s combine a few into a naive file reader:

```php
// from memory.php

function formatBytes($bytes, $precision = 2) {
    $units = array("b", "kb", "mb", "gb", "t[...]
```
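To make the line-by-line idea concrete, here is a minimal sketch of my own (a simplification, not the article's exact listing; the function name and the throwaway file are illustrative): fgets() reads one line at a time, so peak memory stays flat no matter how large the file is.

```php
<?php
// Sketch: count lines by reading one at a time with fgets(), so peak
// memory stays flat regardless of file size. Names here are my own.
function countLines(string $path): int
{
    $handle = fopen($path, 'r');
    $lines = 0;

    // fgets() returns one line per call, and false at end-of-file
    while (fgets($handle) !== false) {
        $lines++;
    }

    fclose($handle);

    return $lines;
}

// Example usage with a throwaway file:
$path = tempnam(sys_get_temp_dir(), 'demo');
file_put_contents($path, str_repeat("a line of text\n", 10000));

echo countLines($path), "\n"; // prints 10000
unlink($path);
```

Compare this with file_get_contents(), which pulls the entire file into memory at once; for a multi-gigabyte file on a small server, that difference is the whole ballgame.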

Essential Skills for Landing a Test Automation Job in 2018

Wed, 15 Nov 2017 17:30:54 +0000

Every year brings new requirements in the test automation market, and test automation engineers must master new skills in order to stay ahead and land the job of their dreams. Following up on our previous research, World's Most Desirable Test Automation Skills, TestProject examined top job search websites around the world to determine the most in-demand test automation skills and technologies for 2018.

Continue reading: Essential Skills for Landing a Test Automation Job in 2018

Automate CI/CD and Spend More Time Writing Code

Wed, 15 Nov 2017 08:00:13 +0000

This article was sponsored by Microsoft Visual Studio App Center. Thank you for supporting the partners who make SitePoint possible.

What’s the best part about developing software? Writing amazing code. What’s the worst part? Everything else.

Developing software is a wonderful job. You get to solve problems in new ways, delight users, and see something you built making lives better. But for all the hours we spend writing code, there are often just as many spent managing the overhead that comes along with it, and it’s all a big waste of time. Here are some of the biggest productivity sinkholes, and how we at Microsoft are trying to scrape back some of that time for you.

1. Building

What’s the first step to getting your awesome app into the hands of happy users? Making it exist. Some may think moving from source code to binary shouldn't still be such a pain, but it is. Depending on the project, you might compile several times a day, on different platforms, and all that waiting is time you could have spent coding. Plus, if you’re building iOS apps, you need a Mac build agent, which isn't necessarily your primary development tool, particularly if you’re building apps in a cross-platform framework.

You want to claim back that time, and the best way to do that is (it won’t be the last time I say this) automation. You need to automate away the configuration and hardware management so the apps just build when they’re supposed to. Our attempt to answer that need is Visual Studio App Center Build, a service that automates all the steps you don’t want to reproduce manually, so you can build every time you check in code, or any time you, your QA team, or your release managers want to. Just point Build at a GitHub, Bitbucket, or VSTS repo, pick a branch, configure a few parameters, and you’re building Android, UWP, and even iOS and macOS apps in the cloud, without managing any hardware.
And if you need to do something special, you can add post-clone, pre-build, and post-build scripts to customize.

2. Testing

I've spent many years testing software, and throughout my career, there were three questions I always hated hearing: “Are you done yet?” “Can you reproduce it?” “Is it really that bad?”

In the past, there’s rarely been enough time or resources for thorough, proper testing, but mobile development has exacerbated that particular problem. We now deliver more code, more frequently, to more devices. We can’t waste hours trying to recreate that elusive critical failure, and we don’t have time to argue over whether a bug is a showstopper. At the same time, we’re the gatekeepers who are ultimately responsible for a high-visibility failure or a poor-quality product, and as members of a team, we want to get ahead of problems to increase quality, rather than just standing in the way of shipping.

So what’s the answer? “Automation,” sure. But automation that makes sense. Spreadsheets of data and folders of screenshots mean nothing if you can’t put it all together. When you’re up against a deadline and have to convince product owners to make a call, you need to deliver information they can understand, while still giving devs the detail they need to make the fix.

To help with that, we’ve created App Center Test, a service that performs automated UI tests on hundreds of configurations across thousands of real devices. Since the tests are automated, you run exactly the same test every time, so you can identify performance and UX deviations right away, with every build. Tests produce screenshots or videos alongside performance data, so anyone can spot issues, and devs can click down into the detailed logs and start fixing right away. You can spot-check your code by testing on a few devices with every commit, then run regressions on hundreds of devices to verify that everything w[...]

Get a lifetime of online privacy with VPN Unlimited for under $45

Tue, 14 Nov 2017 23:16:19 +0000

(image: VPN Unlimited)

If you've ever connected to public Wi-Fi, chances are your data and browsing activity were unencrypted and vulnerable to falling into the wrong hands. With the increasing scale and frequency of cyber attacks, it's more important than ever to protect your online privacy. VPN Unlimited is a VPN (virtual private network) service that routes your connection through remote servers, masking your online activity and letting you access geo-blocked content from anywhere in the world. A standard subscription costs $9 USD / month, but right now SitePoint users can get a lifetime account for just $42.50.

Continue reading: Get a lifetime of online privacy with VPN Unlimited for under $45