Why you should develop for latest, greatest

Developers sometimes choose not to use the latest available language features that would be appropriate to tackle a problem, for fear of alienating users and collaborators. This is a bad habit and we should stop doing it. Transpilers are part of the solution. What are transpilers, where are they used and what is the benefit? Why should we consider transpiling all our code?

I cut this piece from an upcoming article that is way too long anyway. I made this new article by reusing and reshaping existing text for a new audience and frame. You are reading a new text that was first built as part of another text. – Yes. This was transpiling: rephrasing an input, including externally supplied, derivative or implicit facts about it, into an output that generally expresses the same thing. Excuse me, what? Let me go into some details.

Transpiling: Saying the same but different

In software, transpilers are also known as source-to-source compilers. They take in a program written in one language and write out a roughly equivalent program for another language. The other language may be another version or dialect of the input language or something entirely different. Don’t be too critical about the words: transpilers are just like all other compilers. Source code is machine-intelligible, otherwise it could not be compiled. Machine code is intelligible to humans, at least in principle.

Preprocessors are transpilers

A preprocessor is essentially a transpiler even if it does not interpret the program itself. The C language preprocessor is a mighty tool. It allows you to write placeholders that will be exchanged for code before the actual C compiler touches it. These placeholders may even have parameters or make the program include more code only if needed. Concatenating many source files into one and minifying these by stripping unnecessary whitespace can also be seen as a primitive form of transpiling.

Coding Style Fixers are transpilers

Automatic tools that edit your source code are transpilers. They might only exchange tabs for four space characters or make sure your curly braces are always in the same place, or they may do much more involved stuff. For example, php-cs-fixer transforms your technically correct code written in plain PHP into technically correct code in standards-conforming plain PHP. One such standard is PSR-2, later deprecated in favor of PSR-12 and PER-1 – these are all maintained by the PHP FIG. Software projects may define their own standards and configure tools to transpile existing code to conform to their evolving standards.
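As an illustration, a minimal sketch of a php-cs-fixer (v3) configuration; the path and the rule picks are assumptions for the example, not project requirements:

    <?php
    // .php-cs-fixer.dist.php – rewrite the tree into the project standard
    // on every run of the fixer.
    $finder = PhpCsFixer\Finder::create()
        ->in(__DIR__ . '/src');

    return (new PhpCsFixer\Config())
        ->setRules([
            '@PSR12' => true,                        // successor of PSR-2
            'array_syntax' => ['syntax' => 'short'], // array() becomes []
        ])
        ->setFinder($finder);

Running vendor/bin/php-cs-fixer fix then transpiles the working tree in place.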

Compilers are transpilers

A compiler is a transpiler. It takes in the source code and builds a machine-executable artifact, the binary code. It might also build bytecode for some execution platform like Java’s JVM. It might build code for a relatively primitive intermediate language like CIL or a machine-specific Assembly Language. Another compiler or an interpreter will be able to work with that to either run the software or turn it into a further optimized format. These transformations are potentially lossy.

Decompilers are transpilers

Earlier in life I used tools like SoftICE that would translate back from binary machine instructions to Assembly Language so that I could understand what exactly the machine was doing and make it do some unorthodox things. Translating back from Machine Code to the machine-specific Assembly Language is technically possible and lossless, but the result is not pretty.

Lost in translation

When humans rewrite a text for another target audience, they will remove remarks that are unintelligible or irrelevant to the new audience. They may also add things that were previously understood without saying or generally known in the former audience. Transpilers do the same. When they transpile for machine consumption, they remove what the machine has no interest in: whitespace, comments, etc. They can also replace higher-concept expressions with detailed instructions in lower concepts. Imagine I compiled a program from assembly language into binary machine code and then decompiled it back to assembly language. Is it still the same program? Yes and no. It is still able to compile back into the same machine program. It does not look like the program I originally wrote. Any macro with a meaningful name was replaced by the equivalent step-by-step instructions without any guidance as to their intention. Any comments I wrote to help me or others navigate and reason about the code are lost in translation. The same is true any time we translate from a more expressive higher concept to a lower concept. Any implicit defaults I did not express now show up as deliberate and explicit artifacts, or the other way around, depending on tool settings.

Lost in Translation but with Humans

You may know that from machine-translated text. Put any non-trivial text into a machine translator, translate it from English to Russian to Chinese to German and then back to English. In the best case, it still expresses the core concept. In the worst case it is complete garbage and misleading.

Controlled languages like Simple English, Français fondamental, Leichte Sprache, etc. are another such thing. They use a reduced syntax with fewer options and variations and a smaller selection of words. Some, like Aviation English or Seaspeak, also try to reduce the chance of fatal ambiguity or mishearing.

These reduced languages are supposedly helpful for those who cannot read very well, are still learning the language or have a learning disability. They may also enable speakers of a closely related foreign language to understand a text, and they generally cater to machine translation. For those who easily navigate the full-blown syntax and vocabulary and can cope with ambiguity and puns, simplified language can be repetitive, boring and an unnecessary burden on actual communication. Choosing a phrase or a well-known, roughly fitting word over a less-used but more precise word is an intellectual effort. Reading a known specific word can be easier on the brain than constructing a meaning from a group of more common words. Speaking to an expert in a language that deliberately evades technical terms may carry an unintended subtext. Speaking to a layman in lawyerese or technobabble might not only make it hard for them to understand you but also hard for them to like you. Readers will leave if I make this section any longer.

Useful application

Now that everybody is bored enough, let’s see where transpiling is useful and what it buys us.

Upgrading Code to newer language versions

You can use a transpiler to upgrade code to a newer version of the language. Why would you want that? Languages evolve. New features are added that allow you to write less and mean the same. New syntax and features can make old expressions ambiguous. Keywords can become reserved that previously weren’t. Old features become deprecated and will finally stop working in later versions. A transpiler can rewrite your code in a way that will run in the current and next version of a language. It can also move meta information from comments or annotations into actual language code.

    // Before
    /**
     * @readonly
     * @access private
     * @var FluffProviderInterface Tool that adds bovine output
     */
    var $fluffProvider;
    /**
     * Constructs an example
     * 
     * @access public
     *
     * @param IllustrableTopicInterface A topic to explain by example
     * @param bool $padWithFluff        Whether to make it longer than needed
     * @param int  $targetLength        How long to make the article
     *
     * @return string The Article
     */
    function constructExample($topic, $padWithFluff=true, $targetLength=3000)

Now that’s what I call a contrived example. Code might look like this if it was originally written in PHP 4 and enhanced over the years, only using new expressiveness where needed. While it technically runs, it is not how we would write it today.

    // After
    /**
     * @var FluffProviderInterface Tool that adds bovine output
     */
    private readonly FluffProviderInterface $fluffProvider;
    /**
     * Constructs an example
     * 
     * @param IllustrableTopicInterface A topic to explain by example
     * @param bool $padWithFluff        Whether to make it longer than needed
     * @param int  $targetLength        How long to make the article
     *
     * @return string The Article
     */
    public function constructExample(
        IllustrableTopicInterface $topic,
        bool $padWithFluff=true,
        int $targetLength=3000
    ): string

That could be the output of a transpiler. It takes meta information from controlled language in the comments and uses the advanced grammar of the improved PHP language to express it.
In other words, the upgraded code has turned instructions for the human or for external tools into instructions that the language can actually enforce at runtime. Before, the code helped you understand what to put in and what to expect out. Now it forbids you from putting in the wrong things and errors out if the code tries to give back anything but text.

It may drop comments that are already expressed in the actual code. Some project standards suggest dropping @param and @return altogether to make the code more concise to read. I am a little conservative on this topic. A documentation block may be removed if it does not contain any guidance beyond the code. There is no need to rephrase “this is the constructor” or “The parameter of type integer with name $targetLength tells you how long it should be”. But sometimes things deserve explaining, and sometimes the type annotations exceed what the language itself can express. Intersection types are PHP 8.1+. Union types with the false pseudo-type (PHP 8.0+) can express “return this class or false but not true”, while before that the language only allowed “this class or a boolean (either true or false)”. Annotations can be read by tools to work with your code. As demonstrated, a transpiler can use them to rewrite your code to a more robust form. Static analyzers can detect type mismatches that can lead to all sorts of bugs and misbehaviours. Documentation generators can strip away the actual code and transform the comments and structural information into something you can easily navigate and reason about. Code including high concept and documentation is first and foremost for humans. Adapting it for machines often means dumbing it down.
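For illustration, a hedged sketch reusing the hypothetical interface from the example above: the array’s element types survive only in the annotation, while the class-or-false union is natively expressible:

    /**
     * The element types of the array exceed what the native type says,
     * so this annotation is worth keeping after any transpiler run.
     *
     * @param array<int, string> $paragraphs Numbered paragraphs of the article
     *
     * @return IllustrableTopicInterface|false The topic, or false if none was found
     */
    public function findTopic(array $paragraphs): IllustrableTopicInterface|false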

Downgrading to an older platform or language version

Code can be transformed in the other direction, stripping or replacing advanced expressiveness to make an older runtime understand the code. This is very popular with frontend developers: both JavaScript and CSS are usually no longer shipped the way they are written. A variety of type-safe and advanced languages exist that are not even intended to run in their source form but are compiled down to a more or less modern standard of JavaScript, then minified to the smallest valid representation. Variable and function identifiers may be changed to keep them from colliding between unrelated software loaded into the same browser. In other languages, we are used to developing against a target baseline and only using the features it provides, plus annotations for concepts it does not support. We choose the baseline by deciding on the lowest platform we want to or have to support. This is jolly insane and I mean it in a nice way.

Imagine we create a book for small children. We will first create a compelling story, lovely characters and possibly some educational tangent using our words and our thoughts, the level of abstraction we are fluent in and the tools we can handle. We finally take care to adapt wording, level of detail and difficult concepts to fit the desired product. We don’t write to the agent, the publisher or the printing house in baby English. So why should we use anything less than our own development environment supports? It is not healthy. Outside very special situations, or for the joy of it, we generally don’t work with one hand tied behind our back, using antiquated tools and following outmoded standards.

This Catapult resembles the state of the art centuries ago. Shooting it is fun. For any other purpose it is the wrong tool.

If we cater to the lowest assumable set of capabilities at development time, we limit ourselves in a costly way. We cannot benefit from the latest and most convenient, i.e. effortless and reliable set of tools. We are slower than we could be, we will make more mistakes and it will exhaust us more than needed.

Provided our production pipeline, from the development laptop or container to the CI, is able to work with the latest tools, we can use them.

Deliver using a transpiler

The source branch should always target your development baseline, with tools as modern as you can come by. Delivery artifacts, i.e. released versions, should deviate from the source distribution anyway:

  • Why should you ship build time dependencies with a release?
  • Why should you ship CI recipes or linter configurations with a release?
  • Depending on circumstances, shipping the unit tests might be useful or wasteful.
  • You would not normally ship your .git directory, would you?

Adding a transpiler step is just another item, just another reason. Transpiling to your lowest supported baseline is not really different from zipping a file, editing a version string or running a test suite to abort faulty builds before they ship. But still, it is not perfect. The shipped code will run on the oldest supported environment but it will miss many runtime benefits of newer versions. This is especially true if your library is a build time dependency of another project. In the best scenario, there is one build for a fairly recent, reasonable platform expectation and another for a well-chosen older target. Both need to run through the test suite, and ideally the older build will pass it both when actually run on the old platform and when run on an upgraded platform. There are some details, edge cases and precautions needed to make this feasible and reliable. This will be detailed in an upcoming article which just shrank by a good portion.
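To make this concrete, here is a sketch of such a delivery-time downgrade step, assuming rector/rector as a build-time dependency; the paths and the target version are placeholders, not recommendations:

    <?php
    // rector.php – downgrade the delivery artifact, not the source branch
    declare(strict_types=1);

    use Rector\Config\RectorConfig;
    use Rector\Set\ValueObject\DowngradeLevelSetList;

    return static function (RectorConfig $rectorConfig): void {
        $rectorConfig->paths([__DIR__ . '/src']);
        // Rewrite constructs the older runtime cannot parse (readonly
        // properties, enums, ...) into equivalent older code.
        $rectorConfig->sets([DowngradeLevelSetList::DOWN_TO_PHP_74]);
    };

The CI job would run this against a copy of the tree before packaging, leaving the source branch untouched.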

Phar out: horde/components in one file

tl;dr – I packaged horde/components as a single file for easy inclusion in build pipelines.

The horde/components commandline app is an important development tool. It lets you generate a composer.json file from the .horde.yml definition, helps with managing the changelog yml and provides a simple workflow utility which I use for my release pipeline. Last year I added some capabilities for managing git branches and tags. It is very opinionated and I often think about generalizing some functionality to make it more useful beyond horde development.

But the tool is unwieldy. It installs 69 horde dependencies, 5 pear libraries and more besides, including a complete distribution of sabre/dav. This is an issue that will eventually get solved by slowly redesigning the dependency graphs. For now we have to live with it. But how? You don’t want to require-dev that beast into your working tree. Chances are it would blur what is really needed for the unit you are working on and what is only needed to support the tool. It could also lead to library version conflicts.

So move it out of tree and run it as a separate root project, then symlink the commandline program to some path like /usr/local/bin? Yes, that did work, but it still meant two or three steps when there ought to be only one. I wished for a single file that I could easily install. The solution in PHP is a phar file, similar to a jar in Java.

The documentation of phar consists of only a few pages detailing all the possible calls into phar, all the capabilities of the extension – but it left me completely in the dark about how to actually use it. Looking at the CI pipelines of some popular tools distributed via phar.io / phive gave some ideas but also did not bring the wanted outcome. The evening dragged on and on. Finally I found an old article by Matthew Weier O’Phinney. Back in 2015 he advised using a helper (box-project/box) to build a phar. He pointed out what I had already learned in a painful way: the official documentation won’t get me where I want fast.
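The gist of the box approach, as a minimal sketch: a box.json next to composer.json tells box what the entry point is and what to bundle (the paths here are hypothetical):

    {
        "main": "bin/horde-components",
        "output": "horde-components.phar"
    }

With box installed (via composer or phive), a single box compile run then produces the one file to rule them all.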

His article also included some bits on distribution and signing which I will consider at a later stage, and some bits specifically linked to Travis CI, back then the natural champion for CI needs on Github. Today Github promotes Github Actions as the preferred CI tool, so I will have a look into that.

horde/components not only contains some internal workflows, it also used to have functionality to set up a jenkins CI environment. I haven’t used it in years and it is likely broken. I think I should rather implement functionality to call into a Github or Gitlab hook, read metadata from the API and create artifacts like packages and release notes. Having the components tool in one ready-to-run file also makes it easy to get it into a CI. But we are not there yet.

My preliminary build is available via https://horde-satis.maintaina.com/horde-components – there are no versions yet, there is no nightly update and there is no signing yet. I am still figuring out which parts work as expected and which need further improvements. This build is based on the maintaina-com repository and differs in functionality from the official horde.org code base. I think this deviation is inevitable for the moment though I don’t like it much.

If you want to build and distribute your own phar component, read the original article by MWOP. I am very glad I found it – it saved this evening’s success.

Horde Installer: Return to the Vendor

TL;DR – horde/horde-installer-plugin will now install apps to vendor dir and then link to web dir

Apps in the web dir

Until today, the composer plugin for installing horde apps installed apps directly into the web dir and linked configs from outside the web dir into the apps. That had several drawbacks. Developers could not just traverse the vendor dir to jump between libraries and app plugins. It also deviates from composer standards, which install everything besides the root package into a two-level structure below the /vendor directory.

Handle equal things equally

We already installed two other types to the vendor dir and only linked their contents to appropriate locations: horde-dir is about special libraries that expose javascript or other web-readable content; themes packages are all about static assets. It felt quite natural to also move apps there. Every package is first handled equally and then its specific needs are addressed.

Move setup to post installation

Recently, a new composer command horde:reconfigure was added to trigger the reconfiguration mechanisms on demand. The latest changes prepare a next step: reusing the linking and post-installation procedures in tools that don’t run as composer plugins. At some point, the plugin can become a useful but optional component. Without the plugin, composer only adds, removes or updates software and kills any modifications to the vendor dir. In this case it is up to the administrator to run another tool that re-applies the necessary configurations. The ultimate goal is to require less and less such modifications in the vendor dir and also less fiddling with the web dir.

Side effect: Added security

The applications are now symlinked to the web dir, but selectively. Documentation, bindir, admin scripts and unit tests are no longer available in the web dir, nor are some non-runtime files from the root dir. This reduces the attack surface in case filtering mechanisms like htaccess files fail.

Unsolved: Deinstallation case

At the moment the installer does not properly tackle deinstallation of apps. It leaves a directory containing broken symlinks. This is to be solved before a tagged release of this new change can happen.

Backward Compatibility Implications

It is intended to be mostly backward compatible. Users are encouraged to call into vendor/bin instead of calling into the application directories directly – be it in their old or new locations. The components CLI application, mostly used by developers and Continuous Integration jobs, will no longer show up in the web dir at all. You SHOULD configure it through var/config like other apps and then run the CLI through vendor/bin or wherever your bindir is located. This was the preferred approach before. The components app only showed up in the web dir because it is classified as a horde-app.

Auth Headaches

Back in the old days when rock musicians took the same drugs as your grandfather, authorization and authentication might have been very simple. You had a user name, you had a password. Most likely you had one and the same password for each and everything. Congrats if you were smarter back then. Maybe your application had a notion of a super user flag or user levels. The more advanced had a permission system, but who would ever need that? Well, there would only ever be the need for 5 computers, some researchers argued back in the old days, referring to an even older, albeit questioned, quote.

Today, authentication and authorization are much more complicated. People might still log into a system by user name and password. They might need a second factor like a One Time Password right away, or later to perform advanced operations like committing orders. There might be elaborate permission or role-based systems restricting what the user can do on which resources. Users might not have a local password at all but a shadow identity linked to an authentication provider like Google or Github – who are the party assuring to the app that you are in fact the person you claim to be. In an enterprise context, devices might identify their users by SSL certificates or bearer tokens. Finally, the app might have long-lived remember-me cookies separate from its short-lived session tokens or session cookies. These might be bound to specific clients. Changing browsers may put you into a situation where login by user name and password results in a more elaborate, email-based verification.

And on a completely different level you might want to authorize REST API access to entities linked either to a specific user account or to a specific outside service. Things got elaborate, things got complicated.

Basic Definition: Authentication

Authentication is the process of identifying who is dealing with the application.
This generally involves two orthogonal questions:

How is the authentication communicated?

This is usually achieved by presenting some evidence of authentication to the resource. Showing your passport, you might say. In a stateless API, the evidence is presented with each request. In classic HTTP Basic Authentication, the user first accesses a resource and the resource answers, “Authentication required”. Then the user agent (browser) presents the user a form to enter user name and password. The request against the resource is sent again, along with an authentication header containing the user and password in a transport-friendly envelope (BASE64 encoding) which provides no security by itself. The server will check this information and, if it matches, grant access to the resource. The latter is actually authorization – see below. As far as authentication goes, presenting evidence and accepting it is the whole thing. More advanced systems may send a derivative of the actual authentication information. Digest Authentication sends a value computed from user name and password which the server knows or can check against something it knows. The server or any intruder cannot deduce from that value what the actual password was. Another derivative mechanism is cookie or bearer token authentication. A new authentication credential is created, for example by sending user name and password to the server (or to a third party) only once. The credential is then sent along with each request to verify it’s you. You might need your passport or driver’s license to acquire a key to your hotel room, but once you have it, the key is all you need to get in.
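A minimal sketch of that Basic Authentication envelope in PHP – it is plain encoding, not encryption:

    // What the user agent sends along with the repeated request:
    $header = 'Authorization: Basic ' . base64_encode($user . ':' . $password);

    // What the server does to unpack it – trivially reversible by anybody:
    [$user, $password] = explode(':', base64_decode($encodedCredentials), 2);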

How authentication information is checked

The other major aspect is how the server side keeps the necessary information to verify authentication data. More simply put: how does the server know if your user name and password are legit? User name and password might be stored in a file. The password had better not be stored in the file itself but as a derivative value like a computed hash. This way, if somebody steals the file, he will only learn the user names but not the passwords. The password (or its hash) might be stored in a database or in an LDAP server. The credentials might be sent to an authentication API. In some cases, the server does not have to store any authentication data at all. This is true when the authentication data contains means to verify that it has been created by a trusted third party, is time-limited and has not been tampered with. Finally, the server might not care at all. A traditional chat service may receive your user name and create a session key. This key is used to understand who sent or asked for what. As long as you are logged in and keep communicating, no new session for this user can be created. Once you are out for long enough, the session expires and anybody can use the same name again. Having to deal with passwords may be an unwanted complexity. Authentication is identifying you by any (sufficient) means.
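For the file or database case, PHP’s builtin password API shows the hash-only pattern:

    // At registration time: store only a salted hash, never the password.
    $storedHash = password_hash($password, PASSWORD_DEFAULT);

    // At login time: check the presented password against the stored hash.
    // Neither the server nor a thief of the file can deduce the password.
    if (password_verify($presentedPassword, $storedHash)) {
        // Authentication succeeded – the requester knows the password.
    }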

You know it’s drivin’ me wild – Confusion

Traditional systems have mixed emotions about their guardian angel. As said above, they may mix up knowing who asks (authentication) with knowing if they deserve to receive (authorization). They may also have a notion of an “authentication driver” which might emphasize one aspect over the other, assuming that it is either well-established or irrelevant how the password arrives at the server. New systems should have a clear understanding of both aspects and may link multiple combinations of both receiving and checking credentials to the same identity or user account.

Basic Definition: Authorization

Authorization is the process of deciding if a requester (who could be authenticated or anonymous) is authorized to interact with a resource or system. A concert hall or a renaissance faire may check your authorization to enter by a ticket, a stamp on the hand, a ribbon or a wrist band. They may not give an elaborate thought to who you are. Why should they care? At its core, a username/password authentication system is just checking by a password if you are authorized to identify as a certain user. It may be beneficial to tie a password to an identity. This identity may have permissions and other attached data which it will keep even when it changes its password. Other systems assign authorization to the token itself, which is both the password and the identity. In this case, when the token expires, the identity will expire, too.

DAC: Discretionary Access Control and Permissions

Any system that discerns access by the identity of a user can be considered a DAC system. Permissions may be assigned to a user identity directly or to a named group. A user’s membership in a group, and hence his access level, can be verified through his identity. This can include special pseudo-groups like “all authenticated users” and “guests” or non-authenticated users. Most systems need to expose at least the means of authentication to a yet unauthenticated client. The horde/perms and horde/share libraries implement such a DAC system. Most DAC systems are cumulative: by being a member of more groups, a user can only gain more privileges but not lose them. In practice, it might be easier to define a privilege and allow access if a user does NOT have it (and maybe has some other) rather than trying to work out how to allow negative privileges within the actual system. In a wider sense, countable limits like allowing a user to upload ten pictures or read 5 articles per day can also be expressed in a DAC system.
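As a toy sketch of a cumulative check, with hypothetical names and deliberately much simpler than the horde/perms API:

    // $acl maps a resource to the users, groups and pseudo-groups that
    // hold the "read" privilege on it.
    function mayRead(string $user, array $groups, string $resource, array $acl): bool
    {
        // Cumulative DAC: membership in more groups only ever adds access.
        $grantees = array_merge([$user, 'all_authenticated'], $groups);
        return (bool) array_intersect($grantees, $acl[$resource] ?? []);
    }

    // Alice may read article:42 through her membership in the editors group:
    var_dump(mayRead('alice', ['editors'], 'article:42', ['article:42' => ['editors']]));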

MAC: Mandatory Access Control

Mandatory access control is an evolution of DAC in which access is defined by policies. These allow or prevent a user from sharing a resource with a defined audience. There is little provision for individual exceptions.

RBAC: Role-Based Access Control

Role-Based Access Control systems combine the previous concepts. Multiple permissions on resources are assigned to a role, and subjects or identities are authorized for these roles. Who grants this authorization is not defined by the system – usually it is the person with the role of “approver” on the specific “role” resource. A system may define that a user role is needed to even apply for further roles, or application is not possible at all and roles are centrally assigned. Extended RBAC systems can model roles composed out of other roles. They may also define policies for mutually exclusive roles – a person may not apply for a role for which he is the approver, or a person may not approve his own application for a role, even if he is allowed to apply for the role and has the authority to approve. A ticket system may ask a user if he is in the requester role or in the processor role and may grant access to different queues and commands based on that decision. In sports, you might be a player in one game or league and a referee in another, but you are not allowed to combine both roles’ permissions during a game. This prevents undesirable situations. A system may ask the top administrator to choose if he is currently acting as the administrator or as a regular user and prevent him from mixing both types of access at once.
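A compact sketch of both ideas with made-up names: roles bundle permissions, and a policy blocks activating mutually exclusive roles together:

    class RoleSession
    {
        /** @var array<string, string[]> Permissions bundled per role */
        private array $rolePermissions = [
            'player'  => ['game:play'],
            'referee' => ['game:judge'],
        ];

        /** @var string[][] Pairs of mutually exclusive roles */
        private array $exclusions = [['player', 'referee']];

        /** @var string[] Roles activated for this session */
        private array $active = [];

        public function activate(string $role): void
        {
            foreach ($this->exclusions as [$a, $b]) {
                $other = $role === $a ? $b : ($role === $b ? $a : null);
                if ($other !== null && in_array($other, $this->active, true)) {
                    throw new RuntimeException("$role excludes active role $other");
                }
            }
            $this->active[] = $role;
        }

        public function may(string $permission): bool
        {
            foreach ($this->active as $role) {
                if (in_array($permission, $this->rolePermissions[$role] ?? [], true)) {
                    return true;
                }
            }
            return false;
        }
    }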

Two Factor Authentication and Weak Authentication

Many modern systems combine a primary authentication mechanism like username/password with additional aspects. A user may need to solve a captcha to gain the authority to enter his password.
A user may temporarily lose access to the login mechanism if the same IP address has tried to authenticate too many times within a time span. A user may be authenticated by a certificate or long-lived device cookie but needs to add password authentication or email verification before he has access to some functions, even if his user rank, role, permission, group membership or whatever is otherwise sufficient. One API call may be used in a UI scenario through a short-lived session token and in an integration scenario using a separately scoped access token, but not through a user/password combination. Finally there are One Time Password mechanisms which are only practical if they are limited either to specific requests like transferring money or are required periodically – like once every 24 hours. Keeping mechanisms nicely separated and combining requirements on a more abstract level is crucial. Trying to make a single mechanism powerful and flexible enough can end up making it overly complex and impractical to use. If you think of PSR-7 middlewares handling a HTTP request in PHP, a middleware’s job may be limited to fetching a credential from a header and calling into a backend or multiplexer. The result is stored back into the request as an extra attribute, leaving it to another middleware further down the line to process the result and implement consequences like an error message, a redirect to a login screen or determining which set of roles or permissions is whitelisted for this login type. By enabling or disabling middlewares for a specific request, complexity increases or decreases.
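A sketch of such a narrow middleware against the PSR-15 interfaces; the TokenBackend interface and the attribute name are made up for illustration:

    use Psr\Http\Message\ResponseInterface;
    use Psr\Http\Message\ServerRequestInterface;
    use Psr\Http\Server\MiddlewareInterface;
    use Psr\Http\Server\RequestHandlerInterface;

    /** Hypothetical backend lookup – not an existing library API. */
    interface TokenBackend
    {
        public function lookup(string $token): ?object;
    }

    class BearerTokenMiddleware implements MiddlewareInterface
    {
        public function __construct(private TokenBackend $backend)
        {
        }

        public function process(
            ServerRequestInterface $request,
            RequestHandlerInterface $handler
        ): ResponseInterface {
            $header = $request->getHeaderLine('Authorization');
            if (str_starts_with($header, 'Bearer ')) {
                // Only resolve the credential and attach the result.
                // Consequences are left to middlewares further down the line.
                $identity = $this->backend->lookup(substr($header, 7));
                $request = $request->withAttribute('authenticatedIdentity', $identity);
            }
            return $handler->handle($request);
        }
    }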

Challenging backend services

There is an obvious issue with scenarios in which multiple types of credentials may identify and authorize a user to access the system: In each case the system must be able to access its backend resources. This can be trivial for a global resource like a database accessed through a system wide application credential. It can be more tricky if you try to access a user-specific IMAP backend or an LDAP directory which has its own, completely separate notion of access control. There are several ways to tackle this but I will leave this to another article.

Exceptional Dependency Decency

Libraries can become less attractive to 3rd party integrators if they depend on too many unwanted other packages. This is especially true for libraries that are themselves pulled in as a dependency. Our horde/exception library is no exception to this. It is pulled in by almost all horde libraries because it is horde’s go-to solution for exception hierarchies. A third party user of a library might not be particularly interested in horde’s custom exceptions. In Horde/Yaml: Graceful degradation I detailed why the yaml parser now works without its horde-native peers, including the horde/exception and horde/util libraries.

Dependency Hell: L’enfer, c’est les autres packages

Many Horde libraries started out as breakout parts of a monolithic framework. Drivers for various edge cases came to live in with their parents, interfaces, base classes and default implementations. Only rarely was a very specialized driver factored out into a separate library, usually when it came as a late addition. As a result, the Horde 5 framework sometimes feels a bit like LinkedIn: an item is likely related to most others by at most three or four intermediates. You invite one into your ecosystem and a sizeable portion of the whole clan arrives.

In the PEAR past, the cost of maintaining many micro-libraries and assembling a working installation from them was higher than today. Horde already contains 140+ parts. Understanding the past is the first step.

Dependency Decency: Less is more

A necessary next step is to analyze the status quo and improve the future. Which dependencies are useful for the core business of a package? Which dependencies are only relevant to a specific subclass or driver? Maybe that member of the family should have some private place to assemble its own friends, uncrowded by his relatives’ friends? A library is more likely to be adopted if you can consume the wanted aspects without having to live with anything unwanted.

Case Study: The horde/exception library

The horde/exception library is used in virtually all horde packages. It pulls in the horde/translation library, which is also used in most of the framework. This is no additional burden in the framework use case, but otherwise needs a second thought.

Should a library tightly couple its preferred translation mechanism?

Looking closer, there are only two translation keys, “Not Found” and “Permission Denied”. All other exceptions are translated at the level of the library or application that uses them or simply not at all. Maybe exceptions don’t want to be translated outside of the application calling context. In most places where an uncaught exception ought to be visible or logged, showing the English original is preferable or at least acceptable. Normal users should only ever see sanitized and scrubbed messages even in common error cases – and these should be translated and amended to be useful.

The NotFoundException is only translated if it is created without an input message, defaulting to “Not Found” (or translated equivalent). The same is true for PermissionDeniedException.
Besides that, I think we can provide more powerful and useful exception graphs by moving to interfaces and traits, away from inheritance.

“Not Found” by itself, in any language, is not very useful. What was not found? Where or why was it looked up? Who should be concerned? “Permission denied” – which permission is needed? What can I do about it?

Trying to be too nice sometimes results in being a burden

By translating early, we pull in a dependency even though the exception library itself is likely a pulled-in utility and not a first-class dependency consciously consumed. We also complicate translating through the consumer’s mechanism of choice unless it happens to be horde’s own. In many cases, this is not the best way to handle that. And even if we provided a substantial amount of translatable text with the library, we’d better offer the translations separately.

Promoting composition over inheritance: Do away with inheritable base classes.

The traditional model for horde library or app exceptions was to inherit from Horde_Exception or a few specialized other exceptions. There is an adapter for wrapping ancient PEAR_Error objects into proper exceptions and there is an adapter for consuming recoverable errors/warnings from fopen() and friends. Both become much less relevant in modern PHP. Apart from this, a library or app inherited from Horde_Exception or Horde_Exception_Wrapped and may then base its own specialized App_Exception_Foo on App_Exception, which extends Horde_Exception. Unless, of course, a more appropriate builtin exception is thrown, which will have no Horde-specific interfaces at all. We can do better than that. We can provide the building blocks for libraries and applications to mark their relation to Horde or a specific subsystem but still natively extend PHP’s most appropriate builtin.

Illustration

<?php

declare(strict_types=1);

namespace Horde\Exception;

// Import builtins to the namespace;
use Exception;
use LogicException;
use RuntimeException;
use Throwable;

/**
 * Base Throwable of all things Horde.
 * 
 * Quirk: Would even work without extending Throwable but that feels wrong.
 */
interface HordeThrowable extends Throwable 
{

}

/**
 * A library's or app's specific exception type
 */
interface DomainThrowable extends HordeThrowable 
{

}

/**
 * The most generic Horde\Exception
 */
class BaseException extends Exception implements HordeThrowable
{

}

/**
 * The most generic Horde\Exception for the "Domain" app or library
 */
class DomainException extends Exception implements DomainThrowable
{

}

/**
 * A Horde\Exception for the "Domain" app or library based on 
 * the builtin LogicException
 */
class DomainLogicException extends LogicException implements DomainThrowable
{

}

/**
 * Implements methods to digest an error_get_last() and either
 * throw an exception or return to normal code flow.
 */
trait WrapRecoverableErrorTrait
{
    ...
}

/**
 * The interface matching the implementing trait above
 */
interface WrapRecoverableErrorInterface
{
   ...
}

/**
 * Implements methods to wrap a legacy PEAR_Error class
 */
trait WrapPearErrorTrait
{
    ...
}

/**
 * A specific class for an app, library or subsystem with special 
 * methods to wrap a legacy warning or recoverable error. 
 * Could also extend any other PHP builtin class that matches the
 * kind of error more specifically.
 */
class DomainIoErrorException extends RuntimeException
implements DomainThrowable, WrapRecoverableErrorInterface
{
    use WrapRecoverableErrorTrait;
}

try {
    /**
     * Some code does fopen() or similar which might result in a warning
     * The trait in DomainIoErrorException offers a static method that calls 
     * and evaluates error_get_last(). If it is OK, it just returns. 
     * Otherwise it throws an exception from the available data.
     */
    DomainIoErrorException::checkRecoverableErrors();
} catch (DomainIoErrorException $e) {
   // Offer a meaningful way to deal with failure to open the file 
   echo "caught IO error";
} catch (RuntimeException $e) {
   // This would have caught the DomainIoErrorException, too.
} catch (HordeThrowable $e) {
   /* This catches almost anything horde-related. 
      You might want to log details and show a generic message to users */
    echo "caught generic horde error";
}

Refactor for elegance: Decomposing an inheritance graph into traits and interfaces.

The illustration above gives an idea of how we can compose exceptions so that we have both at once: use PHP builtin exceptions that everybody knows and understands outside our ecosystem, but also use interface hierarchies that help us specifically handle exceptions from a subsystem. We can also amend any builtin exception with methods to add context, implicitly log issues or wrap legacy or foreign types of failure data. This allows for more elegant code that fits nicely both into our own framework and into third party use cases. This way the exception library only provides some very basic building blocks and leaves it to others to aggregate additional functionality as needed. It stays lean and free of scary dependencies. Thus it can become a welcome guest in other ecosystems, or at least one about which other integrators will not worry much. This goes both ways: a third party may contribute a single driver or plugin if there are good examples of how to integrate them into the wider system.

As a side benefit, building a test environment becomes easier if your code does not need to be tested against too many different backends but everything is nicely separated.

But reality bites

In practice, we need to moderate our desire for change and improvement a little bit. While we should work towards eliminating the dependency on the translation system, we should not break the existing behaviour without a transitional phase. There is a good and agreeable path towards it: we communicate our intention, we mark classes or calls that use translation as deprecated and we communicate what alternatives to use. We can even stub the translation system so it becomes optional – if it is not installed, the code simply won’t translate but otherwise behaves as expected.
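Such a stub could look roughly like this; the helper name is illustrative, not a committed API:

    // Graceful fallback: translate if horde/translation is installed,
    // otherwise return the untranslated English original.
    protected function translate(string $message): string
    {
        if (class_exists(\Horde_Exception_Translation::class)) {
            return \Horde_Exception_Translation::t($message);
        }
        return $message;
    }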

We can provide the building blocks for a better exception system now and add our new base exception interface to the legacy classes. Consuming code can transition to checking against the interfaces without waiting for the throwing code to change all at once. In some cases that might mean we need to delay the introduction of return types. It is much less of a problem with parameter signatures, where inheriting classes can always accept a wider, less strictly defined set of inputs. By keeping the older interfaces around, we make our more robust versions a matter of opt-in. We offer decent migration paths without withholding change altogether.

Horde/Yaml: Graceful degradation

Tonight’s work was polishing maintaina’s version of horde/yaml. The test suite and the CI jobs now run successfully both on PHP 7.4 and PHP 8.1. The usual bits of upgrading, you might say.

However, I had my eye on horde/yaml for a reason. I wanted to use it as part of my improvements to the horde composer plugin. Composer famously has been rejecting reading any yaml files for roughly a decade, so I need to roll my own yaml reader if I want to deal with horde’s changelog, project definition and a few other files. I wanted to keep the footprint small though, and not install half a framework along with the installer utility.

You never walk alone – There is no singular in horde

The library used to pull in quite a zoo of horde friends and I wondered why exactly. The answer was quite surprising: there is no singular in horde. Almost none of the packages can be installed without at least one dependency. In detail, horde/yaml pulled in horde/util because it used exactly one static method in exactly one place. It turned out that while that method is powerful and has many use cases, it was used in a way that resulted in a very simple call to a PHP builtin function. I decided that whenever the library is not around, I will directly call that function and lose whatever benefits the other library might grant over this. This pattern is called graceful degradation: if a feature is missing, deliver the next best available alternative rather than just give up and fail. The util library kept being installed although the yaml parser no longer needed it. The parser still depended on the horde/exception package, which in turn depended on horde/translation and a few other helpers. Finally, horde/test also depended on horde/util. It was time to allow a way out. While all of these are installed in any horde-centric use case, anybody who wants only a neat little yaml parser would be very unhappy about that dependency crowd.
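The pattern reduces to a conditional like this sketch; the helper and the builtin stand in for the real ones:

    // Graceful degradation: prefer the powerful helper when installed,
    // fall back to the plain PHP builtin otherwise.
    if (class_exists(\Horde_Util::class)) {
        $value = \Horde_Util::someHelper($input);  // hypothetical helper name
    } else {
        $value = trim($input);                     // the next best alternative
    }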

Alternative exceptions

The library already used native PHP exceptions in many places but wrapped Horde exceptions for some more intricate cases. While this is all desirable, we can also do without it. If the horde/exception package is available, it will be used. Otherwise one of the builtin exceptions is raised instead. This required updating the test suite to make it run correctly either way. But what is the point if the test suite will install horde/util anyway?

Running tests without horde/test unless it is available

I noticed none of the tests really depended on horde/test functionality. Only some glue code for utilities like the horde/test runner or horde/components really did anything useful. I decided to change the bootstrap code so that it would not outright fail if horde/test was not around. Now the library can be tested by an external phpunit installation, phar or whatever. It does not even need a “composer install” run, only a “composer dump-autoload --dev” to build the autoloader file.

A standalone yaml parser

The final result is a horde/yaml that still provides all integrations when run together with its peer libraries but can be used as a standalone yaml parser if that is desirable. I hope this helps make the package more popular outside the horde context.

Lessons learned

Sometimes less is more. Factoring out aspects for reuse is good. Factoring out aspects into all-powerful utility libraries like “util”, “support” and the like can glue an otherwise self-contained piece of software together with too many other things. That makes them less attractive and harder to work with. Gracefully running nevertheless is one part. The other is redesigning said packages which cover too many aspects at once. This is a topic for another article on another night though.

PHP: Tentative Return Types

PHP 8.1 has introduced tentative return types. This can make older code spit out warnings like mad.
Let’s examine what it means and how to deal with it.

PHP 8.1 Warnings that will become syntax errors by PHP 9

PHP 7.4 to PHP 8.1 introduced a lot of parameter types and return types to builtin classes that previously did not have types in their signatures. This would make any class extending builtin classes or implementing builtin interfaces break on the new PHP versions if it did not have the return type specified – and adding the types would create interesting breaks on older PHP versions.

Remember the Liskov Substitution Principle (LSP): Objects of a parent class can be replaced by objects of the child class. For this to work, several conditions must be met:

  • Return types must be covariant, meaning the same as the parent’s return type or a more specific subtype. If the parent class guarantees to return an iterable, then the child class must guarantee an iterable or something more specific, e.g. an ArrayObject or a MyFooList (implementing an iterable type).
  • Parameter types must be contravariant, meaning they must allow all parameters the parent would allow, and can possibly allow a wider set of inputs. The child class cannot un-allow anything the parent would accept.
  • Exceptions are often forgotten: Barbara Liskov‘s work implies that exceptions thrown by a subtype must be of the same types as those thrown by the parent type. This allows for child exceptions or wrapping unrelated exceptions into related types.
  • There are some more expectations on the behaviour and semantics of derived classes which usually are ignored by many novice and intermediate programmers and sadly also some senior architects.

Historically, PHP was very lax about any of these requirements. PHP 4 brought classes and some limited inheritance, PHP 5 brought private and protected methods and properties, a new type of constructor and a very limited type system for arrays and classes. PHP 7 and 8 brought union types, intersection types, return type declarations and primitive types (int, string) along with strict mode. Each version introduced some more constraints on inheritance in the spirit of LSP and gave us the traits feature to keep us from abusing inheritance for language-assisted copy/paste. Each version also came with some subtle exceptions from LSP rules to allow backward compatibility, at least for the time being.

In parallel to return types, a lot of internal classes have changed from returning bare PHP resources to actual classes. Library code usually hides these differences and can be upgraded to work with either, depending on which PHP version they run. However, libraries that extend internal classes rather than wrapping them are facing some issues.

PHP’s solution was to make the return type tentative. Extending classes are supposed to declare compatible return types. Incompatible return types are a syntax error just like in a normal user class. Missing return types, no declaration at all, however, are handled more gracefully. Before PHP 8.1, they were silently ignored. Starting in PHP 8.1 they still work as before, but emit a deprecation notice to PHP’s error output, usually a logfile or the systemd journal. Starting in PHP 9 they will be turned into regular syntax errors.

Why is this good?

Adding types to internal classes helps developers use return values more correctly. Modern editors and IDEs like Visual Studio Code or PhpStorm are aware of class signatures and can inform users about the intended types just as they write the code. Static analysis tools recognize types and signatures as well as some special comments (phpdoc) and can give insight into more subtle edge cases. One such utility is PHPStan. Together they allow us to be more productive and write more robust code with fewer bugs of the trivial and not-so-trivial types. This frees us from being super smart on the technical level or hunting down inexplicable, hard-to-reproduce issues. We can use this saved time and effort to be smarter on the conceptual level: this is where features grow, this is where most performance is usually won and lost.

Why is this bad?

Change is inevitable. Change is usually for the better, even if we don’t see it at first. However, change brings maintenance burden. In the past, Linux distributions often shipped well-tested but old PHP versions to begin with, and release cycles, especially in the enterprise environment, were quite long. Developers had to write code that would run on the most recent PHP as well as versions released many years ago. Administrators would frown upon developers who always wanted the latest, greatest versions for their silly PHP toys. Real men use Perl anyway. But this has changed a lot. Developers and administrators now coexist peacefully in DevOps teams, CI pipelines bundle OS components, PHP and the latest application code into container images. Containers are bundled into deployments and somebody out there on the internet consumes these bundles with a shell oneliner or a click in some UI and expects a whole zoo of software to start up and cooperate. Things are moving much faster now. The larger the code base you own, the more time you spend on technically boring conversion work. You can be lucky and leverage a lot of external code. The downside is that you are now caught in the intersection between PHP’s release cycle and the external code developers’ release cycles – the more vendors, the more components must be kept in sync. PHP 9 is far away, but the time window for these technical changes can be more narrow than you think. After all, you have to deliver features and keep up with subtle changes in the behaviour and API of databases, consumed external services, key/value stores and so on. Just keeping a larger piece of software running in a changing and diverse environment is actually hard work. Let’s look at the available options.

How to silence it – Without breaking PHP 5

You can leverage a new attribute introduced in PHP 8.1 – just add it to your code base right above the method. It signals to PHP that it should not emit a notice about the mismatch.

<?php
class Horde_Ancient_ArrayType implements ArrayAccess {
    /**
     * @return bool PHP 8.1 would require a bool return type
     */
    #[\ReturnTypeWillChange]
    public function offsetExists($offset) {
        // Implementation here
    }
...
}

Older PHP that does not know this attribute will just read it as a comment. Hash-style comments have been around for a long time and, while most style guides avoid them, they work in all modern PHP versions. This approach will work fine until PHP 9.

How to fix it properly – Be safe for upcoming PHP 9

The obvious way forward is to just change the signature of your extending class.

<?php
class Horde_Ancient_ArrayType implements ArrayAccess {
    public function offsetExists(mixed $offset): bool {
        // Implementation here
    }
...
}

The change itself is simple enough. If your class is part of a wider type hierarchy, you will need to update all downstream inheriting classes as well. If you like, you can also remove checking code on the receiving side that previously guarded against unexpected input or just satisfied your static analyzer.
Tools like rector can help you master such tedious upgrade work over a large code base, though they require non-trivial time to configure properly for your specific needs. There are experts out there who can do this for you if you would like to hire professional services – but don’t ask me please.

<?php
...
$exists = isset($ancient['element1']);
// No longer necessary - never mind the silly example
if (!is_bool($exists)) {
    throw new Horde_Exception("Some issue or other");
} 

Doing nothing is OK – For now

In many situations, reacting at all is a choice and not doing anything is a sane alternative. As always, it depends. Are you planning a major refactoring, replacing larger parts of the code with a new library or major revision? Has your customer signaled he might move away from the code base? Don’t invest.

My approach for the maintaina-com code base

The maintaina-com github organization holds a fork of the Horde groupware and framework. With over 100 libraries and applications to maintain, it is a good example. While end users likely won’t see the difference, the code base is adapted for modern PHP versions, more recent major versions of external libraries, databases, composer as an installer and autoloader. Newer bits of code support the PHP-FIG standards from PSR-3 Logging to PSR-18 HTTP Client. Older pieces show their age in design and implementation. Exactly the amount of change described above makes it hard to merge back changes into the official horde builds – this is an ongoing effort. Changes from upstream horde are integrated as soon as possible.

I approach signature upgrades and other such tasks by grouping code in three categories:

  • Traditional code lives in /lib and follows a coding convention largely founded on PHP 5.x idioms: PSR-0 autoloading, PSR-1/PSR-2 guidelines with some exceptions. This code is mostly unnamespaced; some of it traces back to PHP 4 times. Coverage with unit tests is mostly good for libraries and lacking for applications. Some of this is just wrapping more modern implementations for consumption by older code, hiding incompatible improvements. This is where I adopt attributes when upstream does or when I happen to touch code, but I make no active effort.
  • More modern code in /src follows PSR-4 autoloading, namespaces, PSR-12 coding standards, modern signatures and features to an increasing degree. This generally MUST run on PHP 7.4 and SHOULD run on recent PHP releases. This is where I actively pursue forward compatibility. Unit tests usually get a facelift to these standards and PHPStan coverage in a systematic fashion.
  • Glue code, utility code and interfaces are touched in a pragmatic fashion. Major rewrites come with updated standards and approaches, minor updates mostly ensure compatibility with the ever changing ecosystem.

If you maintain a large code base, you likely know your own tradeoffs, the efforts you keep postponing in favour of more interesting or more urgent work until you have to. Your strategy might be different – maybe porting everything to a certain baseline standard before approaching the next angle. There is no right or wrong as long as it works for you.

Horde Installer: Recent Changes

The maintaina-com/horde-installer-plugin has seen a few changes lately. This piece is run on every composer install or update in a horde installation. A bug in it can easily break everything from CI pipelines to new horde installations and it is quite time consuming to debug. I usually try to limit changes.

Two codebases merged

In the 2.3.0 release of November 2021 I added a new custom command horde-reconfigure which does all the background magic of looking up or creating config snippets and linking them to the appropriate places, linking javascript from addon packages to web-readable locations and so on. This is essentially what the installer plugin does, but on demand. A user can run it after adding new config files to an existing installation. Unfortunately, the runtime environments of the installer plugin and the custom command are very different in terms of available IO, known paths and details about the package. I took the opportunity to clean up code, refactor and rethink some parts to do the same things in a more comprehensible way. As I was aware of the risks, I decided to leave the original installer untouched. I got some feedback and used it myself. It seemed to work well enough.
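
For illustration, this is roughly how a composer plugin exposes such a custom command. A minimal sketch against composer’s documented plugin API – the class name and the command body are made up, not the actual plugin code:

<?php
// Sketch: a custom composer command. The plugin additionally has to
// announce this class via composer's CommandProvider capability,
// which is left out here.
use Composer\Command\BaseCommand;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class HordeReconfigureCommand extends BaseCommand
{
    protected function configure(): void
    {
        $this->setName('horde-reconfigure');
        $this->setDescription('Relink config snippets and web assets on demand');
    }

    protected function execute(InputInterface $input, OutputInterface $output): int
    {
        // Look up or create config snippets, link javascript from addon
        // packages to web-readable locations and so on.
        $output->writeln('Reconfiguring the horde installation ...');
        return 0;
    }
}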

For the 2.4.0 release I decided to finally rebase the installer onto the command codebase and get rid of the older code. It turned out that the reconfigure command lacked some details which are important in the install use case. Nobody ever complained because these settings are usually not changed or deleted outside the install/update phase. As of v2.4.4 the installer is feature complete again.

New behaviour in v2.4

The installer has been moved from the install/update phase to the autoload-dump phase. It now processes the installation as a whole rather than one library at a time, which simplifies things a lot. Previously, the installer ran for each installed package and potentially did a few procedures multiple times. Both the installer and the horde-reconfigure command now issue some output to the console about their operation, and they process the installation only once, with the updated autoloader already configured. The changes now also apply on removal of packages or on other operations which require a rewrite of the autoloader. The registry snippets now include comments explaining that they are autogenerated and how to override the autoconfigured values.
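
In composer plugin terms the move looks roughly like this – a hedged sketch using composer’s event API; the plugin class and handler names are illustrative:

<?php
// Sketch: running once after the autoloader is dumped instead of once
// per installed package.
use Composer\Composer;
use Composer\EventDispatcher\EventSubscriberInterface;
use Composer\IO\IOInterface;
use Composer\Plugin\PluginInterface;
use Composer\Script\Event;
use Composer\Script\ScriptEvents;

class InstallerPlugin implements PluginInterface, EventSubscriberInterface
{
    public function activate(Composer $composer, IOInterface $io): void {}
    public function deactivate(Composer $composer, IOInterface $io): void {}
    public function uninstall(Composer $composer, IOInterface $io): void {}

    public static function getSubscribedEvents(): array
    {
        // Fires once per install, update or removal, after the
        // autoloader has been (re)written.
        return [ScriptEvents::POST_AUTOLOAD_DUMP => 'processInstallation'];
    }

    public function processInstallation(Event $event): void
    {
        // Walk the installation as a whole: registry snippets,
        // javascript links, config files.
        $event->getIO()->write('Processing horde installation ...');
    }
}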

Outlook to 2.5 or 3.0

The composer API has improved over the last year. We need to be reasonably conservative to support older composer versions packaged by OS distributions. At some point in the future, however, I want to have a look at using composer to simplify life:

  • Improve Theme handling: Listing themes and their scope (global and app specific), setting default theme of an installation
  • Turning a regular installation into a development setup for specific libraries or apps
  • Properly registering local packages into composer’s package registry and autoloader (useful for distribution package handling).

Both composer’s native APIs and the installer plugin can support improving a horde admin’s or developer’s life (a small code sketch follows the list):

  • Make horde’s own “test” utility leverage composer to show which optional packages are needed for which drivers or configurations.
  • Expose some obvious installation health issues on the CLI.
  • Only expose options in the config UI which are supported by current PHP extensions and installed libraries.
  • Expose a check if a database schema upgrade is needed after a composer operation, both human readable and machine consumable. This should not autorun.
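
Composer’s runtime API could back several of these checks. A minimal sketch – the package names are hypothetical; the real optional dependencies differ per driver:

<?php
// Sketch: checking for optional packages via composer's runtime API
// (available since composer 2.0). The package names are hypothetical.
use Composer\InstalledVersions;

$optionalDrivers = ['horde/imap-client', 'horde/activesync'];
foreach ($optionalDrivers as $package) {
    $state = InstalledVersions::isInstalled($package) ? 'installed' : 'missing';
    echo $package . ': ' . $state . PHP_EOL;
}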

The actual feature code may be implemented in separate libraries and be out of scope for the installer itself. As a rule, horde is supposed to be runnable without composer, but this is moving out of focus more and more.

bookmark_borderMaintaina/Horde UTF-8 on PHP 8

On recent OS distributions, two conflicting changes can bring trouble.

MariaDB refuses connections with ‘utf-8’ encoding

Recent MariaDB does not like the $conf['sql']['charset'] default value of 'utf-8'. It runs fine if you change to the more precise 'utf8mb4' encoding, which is what recent MySQL understands 'utf-8' to be. You could also use 'utf8mb3', but this won’t serve modern users very well. The 'utf8mb3' value is what older MariaDB and MySQL used internally when the user told them to use 'utf-8'. But this character set supports only a subset of unicode, missing much-used icons like ☑☐✔✈🛳🚗⚡⅀ which might appear anywhere from calendar events sent out by travel agencies to todos or notes users copy and paste from other documents.

I have changed the sample deployment code to use utf8mb4 as the predefined config value.
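
In an existing installation this is a one-line change in the horde configuration. A sketch – the exact file location may differ per deployment:

<?php
// config/conf.php (excerpt) - use the precise charset name
$conf['sql']['charset'] = 'utf8mb4';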

Shares SQL driver does not understand DB-native charsets

The Shares SQL driver does some sanitization and conversion when reading from or writing to the DB. The conversion code does not understand DB-native encodings like “utf8mb4”. I have applied a patch to the share library that detects and fixes this case, but I am not satisfied with this solution. First, this issue is bound to pop up in more places and I would not like to have this code in multiple places. Either the DB abstraction library horde/db or the string conversion library in horde/util should provide a go-to solution for mapping/sanitizing charset names. Any library using the config value should know that it needs to be sanitized but should not be burdened with the details. I need to follow up on this.
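
Such a go-to solution could be as small as one central mapping helper. A hedged sketch – the class and method names are hypothetical, not the actual horde/util API; the commit linked below shows the direction the real fix took:

<?php
// Sketch of a central charset name sanitizer. Names are hypothetical.
class CharsetSanitizer
{
    /**
     * Map DB-native charset names to names PHP's iconv/mbstring
     * functions understand.
     */
    public static function sanitize(string $charset): string
    {
        switch (strtolower($charset)) {
            case 'utf8mb3':
            case 'utf8mb4':
                return 'UTF-8';
            default:
                return $charset;
        }
    }
}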

Update

See https://github.com/horde/Util/commit/7019dcc71c2e56aa3a4cd66f5c81b5273b13cead for a possible generalized solution.

bookmark_borderA new phase in life

TL;DR – I changed jobs and this will not affect ongoing maintenance of anything Horde

I spent almost my whole work life with a single employer. It was quite a trip. I was part of it as the company grew from a handful of people into fifty, then a hundred, then ever more. I saw how structures developed, how people grew with their tasks and how brand recognition was built. It was a great time and I took advantage of all the opportunities and challenges that came along with it. I traveled to so many places. The excitement of speaking at conferences, being a trainer, leading teams, designing architectures.

But after almost 15 years I am at a point where things need to change. Family is a priority now in a different way. Home has a different meaning. I needed a clean break. A few days ago I started a new job with one of Europe’s most relevant software companies. So far everything is shiny and new – I like it.

A little change comes along with it, too. Work life will not involve anything PHP or Horde anymore. There is this new, clear distinction between these things I do for fun or out of private interest on the one hand and earning money on the other. You cannot hire me for freelance work.

Nobody needs to worry. The Maintaina Horde fork is not going away. Development work on PHP 8.1 compatibility and features has not stopped.

Commercial Horde support at B1 Systems will still be around. I had the pleasure of working with an excellent team, and that team is well capable of keeping up the expected quality and response times. If you need any work done for hire, contact that company.

It’s an exciting summer after a pandemic winter. I will possibly take some weeks away from mailing lists and bug reports to concentrate on things I must do now and things I like to do now. This is going to be fun. Stay tuned for updates.