Subscribe: Planet PHP
http://www.planet-php.org/rss/

Planet PHP



People blogging about PHP



 



How to Optimize Docker-based CI Runners with Shared Package Caches - SitePoint PHP

Tue, 21 Nov 2017 17:00:28 +0000

At Unleashed Technologies we use GitLab CI with Docker runners for our continuous integration testing. We've put significant effort into speeding up build execution. One of the optimizations we made was to share a cache volume across all the CI jobs, allowing them to share files like package download caches.

Continue reading How to Optimize Docker-based CI Runners with Shared Package Caches on SitePoint.
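The shared-cache idea can be sketched as a .gitlab-ci.yml fragment. This is a hypothetical setup, not the article's actual configuration: it assumes the runner has been configured (in its config.toml) to mount a shared volume at /cache into every job container, and the image name is illustrative.

```yaml
# Hypothetical job: the runner is assumed to mount a shared /cache volume
# into every job container via its volumes configuration.
test:
  image: php:7.1
  variables:
    # Point Composer's download cache at the shared volume so every job
    # reuses packages downloaded by earlier jobs.
    COMPOSER_CACHE_DIR: "/cache/composer"
  script:
    - composer install --prefer-dist
    - vendor/bin/phpunit
```

COMPOSER_CACHE_DIR is Composer's standard environment variable for relocating its download cache, which is what makes this trick work without touching the project itself.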







Symfony 4: An Update on Flex - Fabien Potencier

Tue, 21 Nov 2017 00:00:00 +0000

Symfony 4 is just around the corner. And Symfony Flex is one of the main selling points for the upgrade. Developers love the new philosophy. And a lot of changes happened since my last blog post. Let me recap the recent changes that you might not be aware of. Most of these changes were prompted by feedback from early adopters.

Using recipes from the contrib repository has become easier. The first time you require a package for which a contrib recipe exists, Flex now asks for your permission to execute the recipe. It also lets you set the "accept contrib recipes" flag for the whole project (so it does not ask the question again). A much better experience, and better discoverability.
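The per-project flag lives in composer.json; a minimal sketch of the setting Flex toggles when you accept contrib recipes project-wide:

```json
{
    "extra": {
        "symfony": {
            "allow-contrib": true
        }
    }
}
```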

One issue a lot of people have had is recipes being installed multiple times. I thought Flex had you covered except under some rare circumstances, but apparently those rare conditions are a bit more frequent than I anticipated. So, instead of trying to make clever hooks into the Composer life-cycle to determine whether a recipe should be applied or not, each project now has a symfony.lock file where Flex's state is stored (this file must be committed to the project's code repository). Problem solved. And that lock file opens up some nice opportunities... but that's for another iteration.

Makefile support proved to be a nightmare. Despite several patches to the recipes to improve Makefile support (Windows support and whatnot), it was a never-ending battle. At the end of the day, we realized that people had the wrong expectations about the Makefile, and that make is not as standard as we thought. Instead of fighting the tool, we decided to add symfony/console as a requirement. Done. Simple. Efficient. And yes, Makefile support was introduced just as a facade to the optional Symfony Console component. You can still use a Makefile for your project of course. And I still think that this is a good idea.
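Concretely, "add symfony/console as a requirement" is just a composer.json entry along these lines (the version constraint here is illustrative, not the one the recipes use):

```json
{
    "require": {
        "symfony/console": "^3.4|^4.0"
    }
}
```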

The minimum version supported by Flex is now PHP 7.0, down from PHP 7.1, to let more people use Flex. It's especially important for projects using Symfony 3.4, which is a long-term support release. That will make the upgrade path to Symfony 4 smoother as well.

Having beta testers helped find bugs you would never have expected. When upgrading Flex, people had weird issues. That was because of how Composer plugins work. Some Flex classes are loaded early on by Composer. And if, down the road, Flex is updated, you can end up in a situation where a class from the newest version is mixed with classes from the old version. Not a good situation to be in. Now, Flex loads all its classes preemptively to be sure it is always in a consistent state.

In terms of features, Flex is now able to auto-register almost any bundle for which there is no recipe. And we added support for environment variables in PHPUnit configuration. Using Flex is also much faster, as there are fewer round trips with the server (only one request per Composer session most of the time).
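Environment variables in PHPUnit configuration use PHPUnit's standard `<env>` element inside the `<php>` section; a minimal sketch (the variable names and values here are illustrative, not what a Flex recipe generates):

```xml
<phpunit>
    <php>
        <!-- Variables defined here are available via getenv()/$_ENV in tests -->
        <env name="APP_ENV" value="test"/>
        <env name="APP_DEBUG" value="1"/>
    </php>
</phpunit>
```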

Regarding documentation, we have also started to document the new workflow with updates to the upgrade tutorial and the best practices document. They now take into account changes introduced by Flex. However, keep in mind that documentation updates are still ongoing.

Last, but not least, the Symfony Flex Server has a new UI, which allows you to better discover available recipes, aliases, and packs.

Happy Flex!




How to Optimize SQL Queries for Faster Sites - SitePoint PHP

Mon, 20 Nov 2017 17:00:57 +0000

This article was originally published on the Delicious Brains blog, and is republished here with permission. You know that a fast site == happier users, improved ranking from Google, and increased conversions. Maybe you even think your WordPress site is as fast as it can be: you've looked at site performance, from the best practices of setting up a server, to troubleshooting slow code, and offloading your images to a CDN, but is that everything? With dynamic, database-driven websites like WordPress, you might still have one problem on your hands: database queries slowing down your site. In this post, I’ll take you through how to identify the queries causing bottlenecks, how to understand the problems with them, along with quick fixes and other approaches to speed things up. I’ll be using an actual query we recently tackled that was slowing things down on the customer portal of deliciousbrains.com.

Identification

The first step in fixing slow SQL queries is to find them. Ashley has sung the praises of the debugging plugin Query Monitor on the blog before, and it’s the database queries feature of the plugin that really makes it an invaluable tool for identifying slow SQL queries. The plugin reports on all the database queries executed during the page request. It allows you to filter them by the code or component (the plugin, theme or WordPress core) calling them, and highlights duplicate and slow queries. If you don’t want to install a debugging plugin on a production site (maybe you’re worried about adding some performance overhead), you can opt to turn on the MySQL Slow Query Log, which logs all queries that take longer than a certain amount of time to execute. This is relatively simple to configure, including where the queries are logged to. As this is a server-level tweak, the performance hit will be less than that of a debugging plugin on the site, but it should be turned off when not in use.
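Turning on the slow query log is a my.cnf change along these lines; the threshold and log path below are examples to tune for your server, not values from the article:

```ini
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
# Log any query taking longer than 1 second
long_query_time     = 1
```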
Understanding

Once you have found an expensive query that you want to improve, the next step is to try to understand what is making the query slow. Recently, during development of our site, we found a query that was taking around 8 seconds to execute!

SELECT
    l.key_id, l.order_id, l.activation_email, l.licence_key,
    l.software_product_id, l.software_version, l.activations_limit,
    l.created, l.renewal_type, l.renewal_id, l.exempt_domain,
    s.next_payment_date, s.status,
    pm2.post_id AS 'product_id', pm.meta_value AS 'user_id'
FROM oiz6q8a_woocommerce_software_licences l
INNER JOIN oiz6q8a_woocommerce_software_subscriptions s ON s.key_id = l.key_id
INNER JOIN oiz6q8a_posts p ON p.ID = l.order_id
INNER JOIN oiz6q8a_postmeta pm ON pm.post_id = p.ID AND pm.meta_key = '_customer_user'
INNER JOIN oiz6q8a_postmeta pm2 ON pm2.meta_key = '_software_product_id' AND pm2.meta_value = l.software_product_id
WHERE p.post_type = 'shop_order'
AND pm.meta_value = 279
ORDER BY s.next_payment_date

We use WooCommerce and a customized version of the WooCommerce Software Subscriptions plugin to run our plugins store. The purpose of this query is to get all subscriptions for a customer where we know their customer number. WooCommerce has a somewhat complex data model, in that even though an order is stored as a custom post type, the id of the customer (for stores where each customer gets a WordPress user created for them) is not stored as the post_author, but instead as a piece of post meta data. There are also a couple of joins to custom tables created by the software subscriptions plugin. Let’s dive in to understand the query more.

MySQL is your Friend

MySQL has a handy DESCRIBE statement which can be used to out… [Truncated by Planet PHP, read more at the original (another 4509 bytes)] [...]
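Where the excerpt cuts off, the article is introducing MySQL's DESCRIBE/EXPLAIN. As a sketch of the technique (output columns vary by MySQL version, and the full query is abbreviated here):

```sql
-- Prefix the slow query with EXPLAIN to see the execution plan:
-- which indexes MySQL considers, which it actually uses, and roughly
-- how many rows it expects to examine per joined table.
EXPLAIN
SELECT l.key_id, l.order_id, s.next_payment_date
FROM oiz6q8a_woocommerce_software_licences l
INNER JOIN oiz6q8a_woocommerce_software_subscriptions s
    ON s.key_id = l.key_id
ORDER BY s.next_payment_date;
```

A row showing `type: ALL` with no `key` is the classic sign of a full table scan, which is usually where the time goes.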



How To Setup Own Bitcoin Simulation Network - Piotr Pasich

Mon, 20 Nov 2017 07:00:44 +0000

Paying for goods with Bitcoin is becoming more and more popular. If you run, maintain or develop an e-commerce website, ICO project, exchange, or trading system, you may be interested in a quick way of integrating with the Bitcoin blockchain. This article will show one of many ways of installing Bitcoin mining software and […]
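One common route to a private simulation chain (the article may well take a different approach) is Bitcoin Core's regtest mode, configured in bitcoin.conf; the credentials below are placeholders:

```ini
# bitcoin.conf -- regtest runs a private chain where blocks can be
# mined instantly, which is ideal for testing payment integrations.
regtest=1
server=1
rpcuser=bitcoinrpc
rpcpassword=change-me
```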



Silex is (almost) dead, long live my-lex - Stefan Koopmanschap

Fri, 17 Nov 2017 10:30:00 +0000

SymfonyCon is happening in Cluj, and on Thursday the keynote by Fabien Potencier announced some important changes. One of the most important announcements was the EOL of Silex in 2018. EOL next year for Silex! #SymfonyCon ( -@gbtekkie)

Silex

Silex has been and still is an important player in the PHP ecosystem. It has played an extremely important role in the Symfony ecosystem, as it showed many Symfony developers that there was more than just the full Symfony stack. It was also one of the first microframeworks that showed the PHP community the power of working with individual components, and how you can glue those together to make an extremely powerful foundation to build upon which includes most of the best practices.

Why EOL?

Now, I wasn't at the keynote so I can only guess at the reasons, but it does make sense to me. When Silex was released, the whole concept of taking individual components to build a microframework was pretty new to PHP developers. The PHP component ecosystem was a lot more limited as well. A huge group of PHP developers was used to working with full-stack frameworks, so building your own framework (even with components) was deemed by many to be reinventing the wheel. Fast-forward to 2017, and a lot of PHP developers are by now used to individual components. Silex has little left to prove on that topic. And with Composer being a stable, proven tool, the PHP component ecosystem growing every day, and now the introduction of Symfony Flex to easily set up and manage projects, maintaining a separate microframework based on Symfony components is just overhead. Using either Composer or Symfony Flex, you can set up a project similar to an empty Silex project in a matter of minutes.

Constructicons

I have been a happy user of Composer with individual components for a while now. One of my first projects with individual components even turned into a conference talk.
I'll update the talk soon, as I have since found a slightly better structure, and if I can make the time for it, I'll also write something about this new and improved structure. I've used it for a couple of projects now and I'm quite happy with it. I also still have to play with Symfony Flex. It looks really promising and I can't wait to give it a try. So the "my-lex" in the title, what is that about? It is about the choice you now have. You can basically build your own Silex using either Composer and components, or Symfony Flex. I would've laughed hard a couple of years ago if you'd told me that I would say this, but: build your own framework!

Is Silex being EOL'ed a bad thing? No. While it is sad to see such an important project go, I think by now the Symfony and PHP ecosystems have already gone past the point of needing Silex. Does this mean we don't need microframeworks anymore? I won't say that, but with Slim still going strong, the loss of Silex isn't all that bad. And with Composer, Flex and the huge number of PHP components, you can always build a microframework that suits your specific needs. The only situation where Silex stopping is an issue is for open source projects that are based on Silex, such as Bolt (who already anticipated this), as well as, of course, your personal or business projects based on Silex. While this software will keep on working, you won't get new updates to the core of those projects, so eventually you'll have to put in the effort to rewrite them on top of something else. [...]
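"Build your own framework" concretely means a composer.json that pulls in just the components you need. A hypothetical starting point comparable to an empty Silex project (these are the components Silex itself builds on; the selection and constraints are illustrative):

```json
{
    "require": {
        "symfony/http-foundation": "^3.4",
        "symfony/http-kernel": "^3.4",
        "symfony/routing": "^3.4",
        "symfony/event-dispatcher": "^3.4"
    }
}
```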



Fedora 27: changes in httpd and php - Remi Collet

Fri, 17 Nov 2017 08:42:00 +0000

The Apache HTTP server and PHP configuration have changed in Fedora 27; here are some explanations.

1. Switch of the Apache HTTP server to event mode

Since the first days of the distribution, the servers have used the prefork MPM. For obvious performance reasons, we chose to follow the upstream project's recommendations and use the event MPM by default. This change is also required to get the full benefit and features of the HTTP/2 protocol via mod_http2.

2. The problem of mod_php

The mod_php module is only supported when the prefork MPM is used. In the PHP documentation, we can read: "Warning: We do not recommend using a threaded MPM in production with Apache 2." And indeed, we already have some bug reports about crashes in this configuration. So it doesn't make sense to keep mod_php by default. Furthermore, this module has some annoying limitations: integrated in the web server, it shares its memory, which may have some negative security impacts, and only a single version can be loaded.

3. Using FastCGI

For many years, we have been working to make PHP execution as flexible as possible, using various combinations, without configuration changes: httpd + mod_php; httpd + php-fpm (when mod_php is disabled or missing and a php-fpm server is running); nginx + php-fpm. The FPM way has become the default recommended configuration for safe PHP execution: support of multiple web servers (httpd, nginx, lighttpd), frontend isolation for security, multiple backends, micro-services architecture, containers (Docker), multiple versions of PHP.

4. FPM by default

Since Fedora 27, mod_php ZTS (multi-threaded) is still provided, but disabled, so FastCGI is now used by default.
To avoid breaking existing configurations during the distribution upgrade, and to have a working server after installation, we chose to implement some solutions, probably temporary: the php package has an optional dependency on the php-fpm package, so it is now installed by default; and the httpd service has a dependency on the php-fpm service, so it is started automatically.

5. Known issues

5.1. Configuration change

After a configuration change, or after a new extension installation, it is now required to restart the php-fpm service.

5.2. Configuration files

With mod_php, it is common to use the php_value or php_flag directives in the Apache HTTP server configuration or in some .htaccess file. It is now required to use the php_value or php_flag directives in the FPM pool configuration file, or to use a .user.ini file in the application directory.

6. Switching back to mod_php

If you really want to keep using mod_php (temporarily), this is still possible, either way:

Switch back to the prefork MPM in the /etc/httpd/conf.modules.d/00-mpm.conf file:

LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
#LoadModule mpm_worker_module modules/mod_mpm_worker.so
#LoadModule mpm_event_module modules/mod_mpm_event.so

Enable the module in the /etc/httpd/conf.modules.d/15-php.conf file. Warning: this configuration will not be supported, and no bug reports will be accepted.

# ZTS module is not supported, so FPM is preferred
LoadModule php7_module modules/libphp7-zts.so

After this change, the php-fpm package can be removed.

7. Conclusion

Fedora 27 now uses a modern configuration, matching the upstream projects' recommendations. Security and performance are improved. Any change may raise some small issues, and a lot of gnashing of teeth, but we will try to take care of any difficulties, and to improve what needs improving in the next updates, or in the next Fedora versions. I plan to update this entry according to feedback. [...]



How to Read Big Files with PHP (Without Killing Your Server) - SitePoint PHP

Thu, 16 Nov 2017 18:00:05 +0000

It’s not often that we, as PHP developers, need to worry about memory management. The PHP engine does a stellar job of cleaning up after us, and the web server model of short-lived execution contexts means even the sloppiest code has no long-lasting effects. There are rare times when we may need to step outside of this comfortable boundary --- like when we're trying to run Composer for a large project on the smallest VPS we can create, or when we need to read large files on an equally small server. It’s the latter problem we'll look at in this tutorial. The code for this tutorial can be found on GitHub.

Measuring Success

The only way to be sure we’re making any improvement to our code is to measure a bad situation and then compare that measurement to another after we’ve applied our fix. In other words, unless we know how much a “solution” helps us (if at all), we can’t know if it really is a solution or not. There are two metrics we can care about. The first is CPU usage. How fast or slow is the process we want to work on? The second is memory usage. How much memory does the script take to execute? These are often inversely proportional --- meaning that we can offload memory usage at the cost of CPU usage, and vice versa. In an asynchronous execution model (like with multi-process or multi-threaded PHP applications), both CPU and memory usage are important considerations. In traditional PHP architecture, these generally become a problem when either one reaches the limits of the server. It's impractical to measure CPU usage inside PHP. If that’s the area you want to focus on, consider using something like top, on Ubuntu or macOS. For Windows, consider using the Linux Subsystem, so you can use top in Ubuntu. For the purposes of this tutorial, we’re going to measure memory usage. We’ll look at how much memory is used in “traditional” scripts. We’ll implement a couple of optimization strategies and measure those too.
In the end, I want you to be able to make an educated choice. The methods we’ll use to see how much memory is used are:

// formatBytes is taken from the php.net documentation
memory_get_peak_usage();

function formatBytes($bytes, $precision = 2) {
    $units = array("b", "kb", "mb", "gb", "tb");

    $bytes = max($bytes, 0);
    $pow = floor(($bytes ? log($bytes) : 0) / log(1024));
    $pow = min($pow, count($units) - 1);

    $bytes /= (1 << (10 * $pow));

    return round($bytes, $precision) . " " . $units[$pow];
}

We’ll use these functions at the end of our scripts, so we can see which script uses the most memory at one time.

What Are Our Options?

There are many approaches we could take to read files efficiently. But there are also two likely scenarios in which we could use them. We could want to read and process data all at the same time, outputting the processed data or performing other actions based on what we read. We could also want to transform a stream of data without ever really needing access to the data. Let’s imagine, for the first scenario, that we want to be able to read a file and create separate queued processing jobs every 10,000 lines. We’d need to keep at least 10,000 lines in memory, and pass them along to the queued job manager (whatever form that may take). For the second scenario, let’s imagine we want to compress the contents of a particularly large API response. We don’t care what it says, but we need to make sure it’s backed up in a compressed form. In both scenarios, we need to read large files. In the first, we need to know what the data is. In the second, we don’t care what the data is. Let’s explore these options…

Reading Files, Line By Line

There are many functions for working with files. Let’s combine a few into a naive file reader:

// from memory.php
function formatBytes($bytes, $precision = 2) {
    $units = array("b", "kb", "mb", [...]



Extending ReactPHP's Child Processes Part Two - Cees-Jan Kiewiet

Thu, 16 Nov 2017 00:00:00 +0000

react/child-process is very flexible and can work in a lot of ways, but sometimes you don't want to be bothered with the details of how it works and just want a simpler API for it.