
Planet PHP

People blogging about PHP


How to Use Generators to Beat Memory Bloat - Nomad PHP

Fri, 19 Jan 2018 05:05:35 +0000

Presented by Korvin Szanto
April 19, 2018, 20:00 CDT

The post How to Use Generators to Beat Memory Bloat appeared first on Nomad PHP.
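The announcement is brief, but the technique in the title is easy to sketch: a generator yields values one at a time instead of materializing a whole array, so peak memory stays flat. A minimal, hedged example (not from the talk itself):

```php
<?php

// A generator produces each value on demand instead of building the
// whole dataset in memory up front.
function readRows(int $total): Generator
{
    for ($i = 0; $i < $total; $i++) {
        yield $i; // each value exists only while the consumer uses it
    }
}

$sum = 0;
foreach (readRows(1_000_000) as $row) {
    $sum += $row;
}
// Peak memory stays flat; range(0, 999_999) would allocate all values at once.
```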

Keep an eye on the churn; finding legacy code monsters - Matthias Noback

Thu, 18 Jan 2018 11:15:00 +0000

Setting the stage: Code complexity

Code complexity is often measured by calculating the cyclomatic complexity per unit of code. The number can be calculated by taking all the branches of the code into consideration. Code complexity is an indicator of several things:

- How hard it is to understand a piece of code; a high number indicates many branches in the code. When reading the code, a programmer has to keep track of all those branches in order to understand all the different ways in which the code can work.
- How hard it is to test that piece of code; a high number indicates many branches, and in order to fully test the piece of code, all those branches need to be covered separately.

In both cases, high code complexity is a really bad thing, so in general we always strive for low code complexity. Unfortunately, many projects that you'll inherit ("legacy projects") will contain code with high code complexity and no tests. A common hypothesis is that high code complexity arises from a lack of tests. At the same time, it's really hard to write tests for code with high complexity, so this is a situation that is really hard to get out of. If you're curious about the cyclomatic complexity of your PHP project, phploc gives you a nice summary, e.g.:

Complexity
  Cyclomatic Complexity / LLOC               0.30
  Cyclomatic Complexity / Number of Methods  3.03

Setting the stage: Churn

Code complexity doesn't always have to be a big problem. If a class has high code complexity but you never have to touch it, there's no problem at all. Just leave it there. It works; you just don't touch it. Of course it's not ideal: you'd like to feel free to touch any code in your code base. But since you don't have to, there's no real maintainability issue. This bad class doesn't cost you much.

What's really dangerous for a project is when a class with high code complexity often needs to be modified. Every change will be dangerous. There are no tests, so there's no safety net against regressions. Having no tests also means that the behavior of the class hasn't been specified anywhere. This makes it hard to guess whether a piece of weird code is "a bug or a feature"; some client of this code may be relying on the behavior. In short, it will be dangerous to modify things, and really hard to fix things. That means touching this class is costly: it takes you hours to read the code, figure out the right places to make the change, verify that the application didn't break in unexpected ways, etc.

Michael Feathers introduced the word "churn" for the change rate of files in a project. Churn gets its own number, just like code complexity. Given the assumption that each class is defined in its own file, and that every time a class changes you create a new commit for it in your version control software, a convenient way to quantify "churn" for a class is to simply count the number of commits that touch the file containing it. Thinking about version control: there is much more valuable information that you could derive from your project's history. Take a look at the book "Your Code as a Crime Scene" by Adam Tornhill for more ideas and suggestions.

Code insight

Combining these two metrics, code complexity and churn, we can get some interesting insights about our code base. We can easily spot the classes/files which often need to be modified, but are also very hard to modify because of their high code complexity. Since the metrics themselves are very easy to calculate (there's tooling for both), this should be easy to do. The tooling has to be specific to the programming language and version control system used, so I can't point you to something that always works, but for PHP and Git there's churn-php. You can install it as a Composer dependency in your project, or run it in a Docker container. It will show you a table of classes with high churn and high complexity, e.g.
+-----------------------+---------------+------------+-------+ | File | Times Ch[...]
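The commit-counting idea above can also be sketched without churn-php. Assuming output piped from `git log --format="" --name-only`, a hypothetical helper might tally churn like this:

```php
<?php

// Hypothetical helper (not churn-php itself): count how often each PHP file
// appears in the output of `git log --format="" --name-only`, i.e. its churn.
function churnFromGitLog(string $log): array
{
    $counts = [];
    foreach (preg_split('/\R/', $log) as $line) {
        $line = trim($line);
        if ($line === '' || !str_contains($line, '.php')) {
            continue; // skip blank separator lines and non-PHP files
        }
        $counts[$line] = ($counts[$line] ?? 0) + 1;
    }
    arsort($counts); // highest churn first

    return $counts;
}

$log = "src/OrderController.php\nsrc/Order.php\n\nsrc/OrderController.php\n";
$churn = churnFromGitLog($log);
// $churn: ['src/OrderController.php' => 2, 'src/Order.php' => 1]
```

Combined with a per-file complexity score, this yields exactly the churn-vs-complexity table that churn-php produces properly.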

grauphel available in Nextcloud app store - Christian Weiske

Wed, 17 Jan 2018 20:58:38 +0000

I finally got around to publishing grauphel on the Nextcloud app store: grauphel @ Nextcloud.

This means you can install the Tomboy note syncing server on your Nextcloud instance via the normal apps menu now.

I have no plans to publish it in the ownCloud app store, nor do I plan to do fixes for ownCloud compatibility.

Automated Testing for PHP training course - Rob Allen

Wed, 17 Jan 2018 11:03:03 +0000


I'm delighted to announce my new venture, PHP Training, with my friend Gary Hockin. As you can probably guess from the name, PHP Training is a training organisation where we provide public training courses on topics related to PHP.

These courses will be held in person, initially at various venues in the UK, and are taught by Gary and myself. Both of us are experienced trainers and this is the first time we've offered training courses where individuals and small teams can attend.

Our first course will be held in Bristol on 13-14 March 2018 and concentrates on teaching you how to test your PHP applications:

Automatically testing your applications is incredibly important when you're developing professional, resilient apps — and working with PHP is no exception. Knowing how and where to start when you've decided to start automated testing can be intimidating, so this two-day course in Bristol will cover (from scratch) everything you'll need to know to start testing your applications with industry standard tools such as PHPUnit. We'll include how to start testing (and why you should) and look at Unit, Integration and Acceptance testing, along with how you can automate the running of your test suite using continuous integration software for an even higher level of confidence.

The course is suitable for complete beginners as well as those with a little experience in writing automated tests, and is delivered by Rob Allen and Gary Hockin, two industry-recognised experts who have given courses around the world. You'll be part of a small group, to make sure the quality of the lessons is as high as possible, and lunch on both days will be provided.

I'm confident that you'll benefit from this course – buy a ticket now!

I also really like our logo!


Using Laravel Collections Outside Laravel - Nomad PHP

Wed, 17 Jan 2018 00:26:45 +0000

Speaker: Oliver Davies @opdavies Laravel Collections are a powerful object-orientated way of interacting with PHP arrays, but did you know that they can be used outside of Laravel, in any PHP project? This short talk shows how we can use Composer to include Laravel Collections within a non-Laravel project and put them to use within …
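For a taste of the fluent style the talk demonstrates, here is a hand-rolled sketch in plain PHP; the real Illuminate\Support\Collection API is far richer, and the names below are only illustrative:

```php
<?php

// A minimal stand-in for the fluent, immutable-style array wrapper that
// Laravel Collections provide. Not the real API; for illustration only.
final class TinyCollection
{
    public function __construct(private array $items) {}

    public function filter(callable $fn): self
    {
        // Re-index so chained calls see a clean list.
        return new self(array_values(array_filter($this->items, $fn)));
    }

    public function map(callable $fn): self
    {
        return new self(array_map($fn, $this->items));
    }

    public function all(): array
    {
        return $this->items;
    }
}

$result = (new TinyCollection([1, 2, 3, 4]))
    ->filter(fn ($n) => $n % 2 === 0)
    ->map(fn ($n) => $n * 10)
    ->all();
// $result: [20, 40]
```

The actual package pulled in via Composer adds dozens more operations (reduce, groupBy, pluck, etc.) on the same chainable pattern.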

The post Using Laravel Collections Outside Laravel appeared first on Nomad PHP.

Don’t write useless unit tests - Brandon Savage

Tue, 16 Jan 2018 14:00:40 +0000

The other day I came across the following code in a project: And the following was a unit test written to test this bit of code: Note that I have omitted the rest of the User class, as well as the Users array that is returned in the test. This test will in fact provide […]
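The post's own code is omitted above, but the general shape of a useless unit test can be illustrated with a hypothetical example (not the post's actual code):

```php
<?php

// Hypothetical illustration: a "useless" test that merely mirrors the
// implementation, next to a test that exercises logic that can actually fail.
final class User
{
    public function __construct(private string $name) {}

    public function name(): string
    {
        return $this->name;
    }
}

// Useless: asserts that a plain getter returns what the constructor stored.
// It can only fail if PHP itself is broken, so it proves nothing.
assert((new User('Alice'))->name() === 'Alice');

// More useful: target behavior with real failure modes, e.g. string handling.
function initials(string $fullName): string
{
    $parts = preg_split('/\s+/', trim($fullName));

    return implode('', array_map(fn ($p) => strtoupper($p[0]), $parts));
}

assert(initials('alice  smith') === 'AS'); // extra whitespace handled
```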

The post Don’t write useless unit tests appeared first on

Documenting Composer scripts - Raphael Stolt

Mon, 15 Jan 2018 14:31:00 +0000

For open source projects I'm involved with, I've developed the habit of defining and documenting the steadily growing number of repository and build utilities via Composer scripts. Having Composer scripts available makes it trivial to define aliases or shortcuts for complex and hard-to-remember CLI calls. It also lowers the barrier for contributors to start using these tools while helping out with fixing bugs or providing new features. Finally, they also simplify build scripts by stashing away complexity.

Defining Composer scripts

If you've already defined or worked with Composer scripts, or even their npm equivalents, you can skip this section; otherwise the next snippet shows how to define them. The Composer scripts defined here range from simple CLI commands with set options (e.g. the test-with-coverage script) to more complex build utility tools (i.e. the application-version-guard script) which are extracted into specific CLI commands to avoid cluttering up the composer.json or even the .travis.yml.

composer.json
{
    "scripts": {
        "test": "phpunit",
        "test-with-coverage": "phpunit --coverage-html coverage-reports",
        "cs-fix": "php-cs-fixer fix . -vv || true",
        "cs-lint": "php-cs-fixer fix --diff --stop-on-violation --verbose --dry-run",
        "configure-commit-template": "git config --add commit.template .gitmessage",
        "application-version-guard": "php bin/application-version --verify-tag-match"
    }
}

Describing Composer scripts

Since Composer 1.6.0 it's possible to set custom script descriptions via the scripts-descriptions element, as shown next. Note that the name of a description has to match the name of a defined custom Composer script to be recognised at runtime. Also note that a description should be worded in the simple present to align with the other Composer command descriptions.

composer.json
{
    "scripts-descriptions": {
        "test": "Runs all tests.",
        "test-with-coverage": "Runs all tests and measures code coverage.",
        "cs-fix": "Fixes coding standard violations.",
        "cs-lint": "Checks for coding standard violations.",
        "configure-commit-template": "Configures a local commit message template.",
        "application-version-guard": "Checks that the application version matches the given Git tag."
    },
    "scripts": {
        "test": "phpunit",
        "test-with-coverage": "phpunit --coverage-html coverage-reports",
        "cs-fix": "php-cs-fixer fix . -vv || true",
        "cs-lint": "php-cs-fixer fix --diff --stop-on-violation --verbose --dry-run",
        "configure-commit-template": "git config --add commit.template .gitmessage",
        "application-version-guard": "php bin/application-version --verify-tag-match"
    }
}

Now when running $ composer in the terminal, the descriptions of the defined custom scripts show up sorted into the list of all available commands, which makes it very hard to spot the Composer scripts of the package at hand. Luckily, Composer scripts can also be namespaced.

Namespacing Composer scripts

To namespace (i.e. some-namespace) the custom Composer scripts of a given package, define the script names with a namespace prefix, as shown next. As the chances are high that you'll be using one or another Composer script several times while working on the package, it's recommended to use a short namespace, say in the range of two to four characters.

composer.json
{
    "scripts": {
        "some-namespace:test": "phpunit",
        "some-namespace:test-with-coverage": "phpunit --coverage-html coverage-reports",
        "some-namespace:cs-fix": "php-cs-fixer fix . -vv || true",
        "some-namespace:cs-lint": "php-cs-fixer fix --diff --stop-on-violation --verbose --dry-run",
        "some-namespace:configure-commit-template": "git config --add commit.template .gitmessage",
        "some-namespace:application-version-guard": "php bin/application-version --verify-tag-match"
    }
}

Now this time when running Truncated by Planet PHP, read more at the origina[...]

Simple CQRS - reduce coupling, allow the model(s) to evolve - Matthias Noback

Fri, 12 Jan 2018 10:20:00 +0000

CQRS - not a complicated thing

CQRS has some reputation issues. Mainly, people feel that it's too complicated to apply in their current projects; it will often be considered over-engineering. I think CQRS is simply misunderstood, which is the reason many people won't choose it as a design technique. One of the common misconceptions is that CQRS always goes together with event sourcing, which is indeed more costly and risky to implement. CQRS alone simply means that you're making a distinction between a model that is used for changing state and a model that is used for querying state. In fact, there's often one model that accepts "write" operations (called the "write model" or "command model") and multiple models that can be used to "read" information from (called "read models" or "query models").

Most projects out there don't use CQRS, since they combine the write and read operations in one model. Whether you use the data mapper or active record pattern, you'll often have one object ("entity") for each domain concept. This object can be created, it has methods that allow for state modifications, and it has methods that will give you information about the object's state. All the legacy projects I've encountered so far use this style of storing and retrieving state. It comes with a certain programming style that is not quite beneficial for the maintainability of the application.

When writing a new feature for such an application, you start by getting all the ingredients in place. You need some information, and you need some dependencies. If you're unlucky, you still fetch your dependencies from some global static place like good old Zend_Registry or sfContext. Equally bad, you fetch your information from the central database. This database contains tables with dozens of columns. It's the single source of truth. "Where is this piece of information? Ah, in table XYZ." So now you use the ORM to fetch a record and automatically turn it into a useful entity for you. Except... the entity you get isn't useful at all, since it doesn't give you the information you need: it doesn't answer your specific question. It gives you either not enough or way too much information. Sometimes your entity comes with a nifty feature to load more entities (e.g. XYZ->getABCs()), which may help you collect some more information. But that will issue another database query, and again will load not enough or way more than you need. And so on, and so on.

Reduce the level of coupling

You should realize that by fetching all this information, you're introducing coupling issues into your code. By loading all these classes, by using all these methods, by relying on all these fields, you're increasing the contact surface of your code with the rest of the application. It's really the same issue as with fetching dependencies instead of having them injected: you're reaching out, and you start relying on parts of the application you shouldn't even be worrying about. These coupling issues will fly back to you in a couple of years, when you want to replace or upgrade dependencies and have to make changes everywhere. Start out with dependency injection and this will be much easier for you.

The same goes for your model. If multiple parts of the application start relying on these entities, it will be more and more difficult to change them, to let the model evolve. This is reminiscent of the "Stable dependencies principle" (one of the "Package design principles"): if a lot of packages depend on a package, that package becomes hard to change, because a change will break all those dependents. If multiple clients depend on a number of entities, these will be very hard to change too, because you will break all the clients. Being hard to change is not good for your model. The model should be adaptable in the first place, since it represents a core concept of[...]
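The write/read split described above can be sketched in a few lines of PHP; class names here are illustrative, not from any particular project:

```php
<?php

// Write model: accepts state changes, answers no ad-hoc questions.
final class ShoppingCart
{
    private array $lines = [];

    public function addProduct(string $productId, int $quantity): void
    {
        if ($quantity < 1) {
            throw new InvalidArgumentException('Quantity must be at least 1');
        }
        $this->lines[$productId] = ($this->lines[$productId] ?? 0) + $quantity;
    }

    // Exposed only so projections can build read models from the state.
    public function lines(): array
    {
        return $this->lines;
    }
}

// Read model: answers exactly one question, nothing more. Clients that only
// need the item count never touch the ShoppingCart entity itself.
final class CartItemCount
{
    public function __construct(private int $count) {}

    public static function fromCart(ShoppingCart $cart): self
    {
        return new self(array_sum($cart->lines()));
    }

    public function count(): int
    {
        return $this->count;
    }
}
```

Because each read model serves a single question, the write model can evolve freely; only the projection that builds the read model has to follow.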

A rant about best practices - Stefan Koopmanschap

Wed, 10 Jan 2018 07:30:00 +0000

I have yet to talk to a developer who has told me that they were purposefully writing bad software. I think this is part of being a developer: you write software that is as good as you can possibly make it within the constraints that you have. In our effort to write the Best Software Ever (TM) we read up on all the programming best practices: design patterns, refactoring and rewriting code, new concepts such as Domain-Driven Design and CQRS, all the latest frameworks; and of course we test our code until we have decent code coverage, and we sit together with our teammates to do pair programming. And that's great. It is. But it isn't. In my lightning talk for the PHPAmersfoort meetup on Tuesday, January 9th, 2018, I ranted a bit about best practices. In this blog post, I try to summarize that rant.

Test Coverage

Test coverage is great! It is a great tool to measure how much of our code is being touched by unit (and possibly integration) tests. A lot of developers I talk to tell me that they strive for 100% code coverage, 80% code coverage, 50% code coverage or some other arbitrary percentage. What they don't mention is whether they actually look at what they are testing. Over the years I have encountered so many unit tests that were not actually testing anything. They were written for a sole purpose: to make sure that all the lines in the code were "green", were covered by unit tests. And that is useless. Completely useless. You get a false sense of security if you work like this.

There are many ways of keeping track of whether your tests actually make sense. Recently I wrote about using docblocks for that purpose, but you can also use code coverage to help you write great tests. Generating code coverage can help you identify which parts of your code are not covered by tests. But instead of just writing a test to ensure the line turns green, you need to consider what that line of code stands for, what behavior it adds to your code. And you should write your tests to test that behavior, not just to add a green line and an extra 0.1% to your code coverage. Code coverage is an indication, not proof of good tests.

Domain-driven design

DDD is a way of designing the code of your application based on the domain you're working in. It puts the actual use cases at the heart of your application and ensures that your code is structured in a way that makes sense for the context it is running in. Domain-Driven Design is a big hit in the programming world at the moment. These days you don't count anymore if you don't do DDD. And you shouldn't just know about DDD or try to apply it here and there, no: ALL YOUR CODES SHOULD BE DDD!1!1shift-one!!1!

Now, don't get me wrong: there is a lot in DDD that makes way more sense than any approach I've used in the past, but just applying DDD to every bit of code you write does not make any sense. Doing things DDD is not that hard, but doing DDD right takes a lot of learning and a lot of effort. And for quite a few of the things I've seen people want to use full-on DDD for recently, I wonder whether it is worth the effort. So yes, dig into DDD, read the blue book if you want, read any book about it, all the blog posts, and apply it where it makes sense. Go ahead! But don't overdo it.

Frameworks

I used to be a framework zealot. I was convinced that everyone should use frameworks, all the time. For me it started with Mojavi, then Zend Framework, and finally I settled on Symfony. To me, the approach and structure that Symfony gave me made so much sense that I started using Symfony for every project I worked on. My first step would be to download (and later: install) Symfony. It made my life so much easier. Using a framework does make a lot of sense in a lot of situations. And I personally do not real[...]
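Returning to the test-coverage point: the difference between covering a line and testing its behavior can be sketched like this (hypothetical function, plain asserts standing in for PHPUnit):

```php
<?php

// Hypothetical example: a small function with actual behavior worth pinning down.
function applyDiscount(float $price, float $rate): float
{
    if ($rate < 0.0 || $rate > 1.0) {
        throw new InvalidArgumentException('Rate must be between 0 and 1');
    }

    return round($price * (1.0 - $rate), 2);
}

// Coverage-only "test": executes the line, asserts nothing meaningful.
applyDiscount(100.0, 0.25); // the line is now "green", behavior unverified

// Behavioral test: specifies what the code is supposed to do.
assert(applyDiscount(100.0, 0.25) === 75.0);
```

Both approaches produce the same coverage number; only the second one can actually catch a regression.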

What version of PHP should my package support? - Brandon Savage

Tue, 09 Jan 2018 14:00:03 +0000

Everybody likes “the new hotness.” Everyone loves a new car, or a new computer, or the state-of-the-art video gaming console. It’s why people camp out for days to get their hands on a new iPhone, when they could just buy one the next week off the shelf. People love to have the hot thing, right […]

The post What version of PHP should my package support? appeared first on