Auth Headaches

Back in the old days when rock musicians took the same drugs as your grandfather, authorization and authentication might have been very simple. You had a user name, you had a password. Most likely you used one and the same password for everything. Congrats if you were smarter back then. Maybe your application had a notion of a super user flag or user levels. The more advanced ones had a permission system, but who would ever need that? Well, there would only ever be a need for five computers, some researchers argued back in the old days, referring to an even older, albeit disputed, quote.

Today, authentication and authorization are much more complicated. People might still log into a system by user name and password. They might need a second factor like a One Time Password right away or later to perform advanced operations like committing orders. There might be elaborate permission or role-based systems restricting what the user can do on which resources. Users might not have a local password at all but a shadow identity linked to an authentication provider like Google or Github – which acts as the party assuring the app that you are in fact the person you claim to be. In an enterprise context, devices might identify their users by SSL certificates or bearer tokens. Finally, the app might have long-lived remember-me cookies separate from its short-lived session tokens or session cookies. These might be bound to specific clients. Changing browsers may put you into a situation where logging in by user name and password triggers a more elaborate, email-based verification step.

And on a completely different level you might want to authorize REST API access to entities either linked to a specific user account or to a specific outside service. Things got elaborate, things got complicated.

Basic Definition: Authentication

Authentication is the process of identifying who is dealing with the application.
This generally involves two orthogonal questions:

How is the authentication communicated?

This is usually achieved by presenting some evidence of authentication to the resource. Showing your passport, you might say. In a stateless API, the evidence is presented with each request. In classic HTTP Basic Authentication, the user first accesses a resource and the resource answers, “Authentication required”. Then the user agent (browser) presents the user with a form to enter user name and password. The request against the resource is sent again, along with an authentication header containing the user name and password in a transport-friendly envelope (BASE64 encoding) which provides no security by itself. The server will check this information and, if it matches, grant access to the resource. The latter is actually authorization – see below. As far as authentication goes, presenting evidence and accepting it is the whole thing.

More advanced systems may send a derivative of the actual authentication information. Digest Authentication sends a value computed from user name and password which the server knows or can check against something it knows. The server or any intruder cannot deduce from that value what the actual password was. Another derivative mechanism is cookie or bearer token authentication. A new authentication credential is created, for example by sending user name and password to the server (or to a third party) only once. The credential is then sent along with each request to verify it’s you. You might need your passport or driver’s license to acquire the key to your hotel room, but once you have it, the key is all you need to get in.
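
To make the Basic scheme described above concrete, here is a minimal plain-PHP sketch of how the credential evidence travels in the header and how the server reads it back; user name and password are of course made up:

    <?php
    // Sketch: what HTTP Basic Authentication looks like on the wire.
    // The BASE64 envelope is only a transport encoding, not encryption.
    $user = 'alice';
    $password = 's3cret';

    // Client side: build the header that is sent along with the retried request.
    $header = 'Authorization: Basic ' . base64_encode($user . ':' . $password);
    // -> "Authorization: Basic YWxpY2U6czNjcmV0"

    // Server side: decode the envelope and split the presented evidence.
    $encoded = substr($header, strlen('Authorization: Basic '));
    [$presentedUser, $presentedPassword] = explode(':', base64_decode($encoded), 2);

    // Authentication is checking the evidence; whether the resource may then
    // be accessed is authorization and a separate decision.
    var_dump($presentedUser === $user && $presentedPassword === $password);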

How authentication information is checked

The other major aspect is how the server side keeps the necessary information to verify authentication data. More simply put: How does the server know if your user name and password are legit? User name and password might be stored in a file. The password itself should rather not be stored in that file; instead, a derivative value like a computed hash should be kept. This way, if somebody steals the file, he will only know the users but cannot know the passwords. The password (or its hash) might be stored in a database or in an LDAP server. The credentials might be sent to an authentication API. In some cases, the server does not have to store any authentication data. This is true when the authentication data contains means to verify that it has been created by a trusted third party, is time-limited and has not been tampered with. Finally, the server might not care at all. A traditional chat service may receive your user name and create a session key. This key is used to understand who sent or asked for what. As long as you are logged in and keep communicating, no new session for this user can be created. Once you are out for long enough, the session expires and anybody can use the same name again. Having to deal with passwords may be an unwanted complexity. Authentication is identifying you by any (sufficient) means.
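
The “store a derivative, not the password” part boils down to PHP’s builtin password hashing functions; a minimal sketch, with the actual storage left out:

    <?php
    // At registration time: only the computed hash is written to the file,
    // database or directory - never the plain password.
    $storedHash = password_hash('correct horse battery staple', PASSWORD_DEFAULT);

    // At login time: verify the presented password against the stored derivative.
    $presented = $_POST['password'] ?? '';
    if (password_verify($presented, $storedHash)) {
        // evidence accepted - authorization decisions come afterwards
    } else {
        // evidence rejected
    }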

You know it’s drivin’ me wild – Confusion

Traditional systems have mixed emotions about their guardian angel. As said above, they may mix up knowing who asks (authentication) with knowing if they deserve to receive (authorization). They may also have a notion of an “authentication driver” which might emphasize one aspect over the other, assuming that it is either well-established or irrelevant how the password arrives at the server. New systems should have a clear understanding of both aspects and may link multiple combinations of both receiving and checking credentials to the same identity or user account.

Basic Definition: Authorization

Authorization is the process of deciding if a requester (who could be authenticated or anonymous) is authorized to interact with a resource or system. A concert hall or a renaissance faire may check your authorization to enter by a ticket, a stamp on the hand, a ribbon or a wristband. They may not give much thought to who you are. Why should they care? At its core, a username/password authentication system just checks by a password whether you are authorized to identify as a certain user. It may be beneficial to tie a password to an identity. This identity may have permissions and other attached data which it will keep even when it changes its password. Other systems assign authorization to the token itself, which is both the password and the identity. In this case, when the token expires, the identity will expire, too.

DAC: Discretionary Access Control and Permissions

Any system that discerns access by the identity of a user can be considered a DAC system. Permissions may be assigned to a user identity directly or to a named group. A user’s membership in a group can be verified through his identity, and so can his resulting access level. This can include special pseudo-groups like “all authenticated users” and “guests” or non-authenticated users. Most systems need to expose at least the means of authentication to a yet unauthenticated client. The horde/perms and horde/share libraries implement such a DAC system. Most DAC systems are cumulative: By being a member of more groups, a user can only gain more privileges, not lose them. In practice, it might be easier to define a privilege and allow access if a user does NOT have it (and maybe has some other one) rather than trying to work out how to express negative privileges within the actual system. In a wider sense, countable limits like allowing a user to upload ten pictures or read five articles per day can also be expressed in a DAC system.
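
A minimal sketch of such a cumulative DAC lookup; this is purely illustrative and not the horde/perms API:

    <?php
    // Cumulative DAC: a user's effective permissions are his own plus those
    // of every group he is a member of - more groups never mean fewer rights.
    function effectivePermissions(string $user, array $userPerms, array $groupPerms, array $membership): array
    {
        $perms = $userPerms[$user] ?? [];
        foreach ($membership[$user] ?? [] as $group) {
            $perms = array_merge($perms, $groupPerms[$group] ?? []);
        }
        return array_values(array_unique($perms));
    }

    $perms = effectivePermissions(
        'alice',
        ['alice' => ['read']],
        ['editors' => ['read', 'edit'], 'guests' => []],
        ['alice' => ['editors', 'guests']]
    );
    // $perms == ['read', 'edit']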

MAC: Mandatory Access Control

Mandatory access control is an evolution of DAC in which access is defined by policies. These policies allow or prevent a user from sharing a resource with a defined audience. There is little provision for individual exceptions.

RBAC: Role-Based Access Control

Role-Based Access Control systems combine previous concepts. Multiple permissions on resources are assigned to a role, and subjects or identities are authorized for these roles. Who grants this authorization is not defined by the system – usually it is the person with the role of “approver” on the specific “role” resource. A system may define that a user role is needed to even apply for further roles, or that application is not possible at all and roles are centrally assigned. Extended RBAC systems can model composed roles out of other roles. They may also define policies for mutually exclusive roles – a person may not apply for a role for which he is the approver, or a person may not approve his own application for a role, even if he is allowed to apply for the role and has the authority to approve. A ticket system may ask a user if he is in the requester role or in the processor role and may grant access to different queues and commands based on that decision. In sports, you might be a player in one game or league and a referee in another, but you are not allowed to combine both roles’ permissions during a game. This prevents undesirable situations. A system may ask the top administrator to choose if he is currently acting as the administrator or as a regular user and prevent him from mixing both types of access at once.
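
The mutually exclusive roles idea could be expressed roughly like this; a hypothetical sketch, not an existing Horde class:

    <?php
    // RBAC sketch: permissions hang off roles, and a policy forbids activating
    // mutually exclusive roles in the same session.
    class RoleRegistry
    {
        /** @var array<string, string[]> role => permissions */
        private $rolePerms = [
            'requester' => ['ticket:create', 'ticket:comment'],
            'processor' => ['ticket:assign', 'ticket:close'],
        ];

        /** @var string[][] sets of roles that must not be combined at once */
        private $exclusive = [['requester', 'processor']];

        /** @param string[] $activeRoles roles the subject chose for this session */
        public function permissionsFor(array $activeRoles): array
        {
            foreach ($this->exclusive as $set) {
                if (count(array_intersect($set, $activeRoles)) > 1) {
                    throw new DomainException('Mutually exclusive roles activated together');
                }
            }
            $perms = [];
            foreach ($activeRoles as $role) {
                $perms = array_merge($perms, $this->rolePerms[$role] ?? []);
            }
            return array_values(array_unique($perms));
        }
    }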

Two Factor Authentication and Weak Authentication

Many modern systems combine a primary authentication mechanism like username/password with additional aspects. A user may need to solve a captcha to gain the authority to enter his password.
A user may temporarily lose access to the login mechanism if the same IP address has tried to authenticate too many times within a time span. A user may be authenticated by a certificate or a long-lived device cookie but needs to add password authentication or email verification before he has access to some functions, even if his user rank, role, permission, group membership or whatever is otherwise sufficient. One API call may be used in a UI scenario through a short-lived session token and in an integration scenario using a separately scoped access token, but not through a user/password combination. Finally, there are One Time Password mechanisms which are only practical if they are either limited to specific requests like transferring money or required periodically – like once every 24 hours. Keeping mechanisms nicely separated and combining requirements on a more abstract level is crucial. Trying to make a single mechanism powerful and flexible enough can end up making it overly complex and impractical to use. If you think of PSR-15 middlewares handling a PSR-7 HTTP request in PHP, a middleware’s job may be limited to fetching a credential from a header and calling into a backend or multiplexer. The result is stored back into the request as an extra attribute, leaving it to another middleware further down the line to process the result and implement consequences like an error message, a redirect to a login screen or determining which set of roles or permissions is whitelisted for this login type. By enabling or disabling middlewares for a specific request, complexity increases or decreases as needed.
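
A rough sketch of such a credential-fetching middleware; the TokenBackend interface and the attribute name are made up for illustration and are not actual Horde classes:

    <?php
    use Psr\Http\Message\ResponseInterface;
    use Psr\Http\Message\ServerRequestInterface;
    use Psr\Http\Server\MiddlewareInterface;
    use Psr\Http\Server\RequestHandlerInterface;

    class TokenExtractionMiddleware implements MiddlewareInterface
    {
        /** @var TokenBackend hypothetical credential checker / multiplexer */
        private $backend;

        public function __construct(TokenBackend $backend)
        {
            $this->backend = $backend;
        }

        public function process(ServerRequestInterface $request, RequestHandlerInterface $handler): ResponseInterface
        {
            $header = $request->getHeaderLine('Authorization');
            if (strpos($header, 'Bearer ') === 0) {
                // Only fetch the credential and record the lookup result.
                // A later middleware decides about error messages, redirects
                // or which roles are whitelisted for this login type.
                $identity = $this->backend->lookup(substr($header, 7));
                $request = $request->withAttribute('authenticatedIdentity', $identity);
            }
            return $handler->handle($request);
        }
    }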

Challenging backend services

There is an obvious issue with scenarios in which multiple types of credentials may identify and authorize a user to access the system: In each case the system must be able to access its backend resources. This can be trivial for a global resource like a database accessed through a system wide application credential. It can be more tricky if you try to access a user-specific IMAP backend or an LDAP directory which has its own, completely separate notion of access control. There are several ways to tackle this but I will leave this to another article.

Horde/Yaml: Graceful degradation

Tonight’s work was polishing maintaina’s version of horde/yaml. The test suite and the CI jobs now run successfully both on PHP 7.4 and PHP 8.1. The usual bits of upgrading, you might say.

However, I had my eye on horde/yaml for a reason. I wanted to use it as part of my improvements to the horde composer plugin. Composer has famously refused to read any yaml files for roughly a decade, so I need to roll my own yaml reader if I want to deal with horde’s changelog, project definition and a few other files. I wanted to keep the footprint small though, and not install half a framework along with the installer utility.

You never walk alone – There is no singular in horde

The library used to pull in quite a zoo of horde friends and I wondered why exactly. The answer was quite surprising. There is no singular in horde. Almost none of the packages can be installed without at least one dependency. In detail, horde/yaml pulled in horde/util because it used exactly one static method in exactly one place. It turned out that while that method is powerful and has many use cases, it was used in a way that resulted in a very simple call to a PHP builtin function. I decided that whenever the library is not around, the code will directly call that builtin function and lose whatever benefits the other library might offer on top. This pattern is called graceful degradation: if a feature is missing, deliver the next best available alternative rather than just give up and fail. The util library kept being installed although the yaml parser no longer needed it. The parser still depended on the horde/exception package, which in turn depended on horde/translation and a few other helpers. Finally, horde/test also depended on horde/util. It was time to allow a way out. While all of these are installed in any horde-centric use case, anybody who wants only a neat little yaml parser would be very unhappy about that dependency crowd.
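
The degradation itself boils down to a runtime check; the helper class and method names here are placeholders, not the actual horde/util call:

    <?php
    // Graceful degradation: prefer the richer helper when it happens to be
    // installed, otherwise fall back to what PHP itself offers.
    if (class_exists('Helper_Class')) {
        $value = Helper_Class::normalize($raw);
    } else {
        $value = trim($raw);
    }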

Alternative exceptions

The library already used native PHP exceptions in many places but wrapped Horde exceptions for some more intricate cases. While this is all desirable, we can also do without it. If the horde/exception package is available, it will be used. Otherwise one of the builtin exceptions is raised instead. This required updating the test suite to make it run correctly either way. But what is the point if the test suite will install horde/util anyway?
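
The same pattern applies to the exceptions, roughly like this (simplified; the actual messages and call sites differ):

    <?php
    // Raise the Horde exception when the horde/exception package is present,
    // otherwise fall back to a builtin SPL exception.
    if (class_exists('Horde_Exception')) {
        throw new Horde_Exception('Unable to parse the YAML stream');
    }
    throw new RuntimeException('Unable to parse the YAML stream');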

Running tests without horde/test unless it is available

I noticed none of the tests really depended on horde/test functionality. Only some glue code for utilities like the horde/test runner or horde/components really did anything useful. I decided to change the bootstrap code so that it would not outright fail if horde/test was not around. Now the library can be tested by an external phpunit installation, phar or whatever. It does not even need a “composer install” run, only a “composer dump-autoload --dev” to build the autoloader file.

A standalone yaml parser

The final result is a horde/yaml that still provides all integrations when run together with its peer libraries but can be used as a standalone yaml parser if that is desirable. I hope this helps make the package more popular outside the horde context.

Lessons learned

Sometimes less is more. Factoring out aspects for reuse is good. Factoring out aspects into all-powerful utility libraries like “util”, “support” and the like can glue an otherwise self-contained piece of software together with too many other things. That makes them less attractive and harder to work with. Gracefully running nevertheless is one part. The other is redesigning said packages which cover too many aspects at once. This is a topic for another article on another night, though.

Horde Installer: Recent Changes

The maintaina-com/horde-installer-plugin has seen a few changes lately. This piece is run on every composer install or update in a horde installation. A bug in it can easily break everything from CI pipelines to new horde installations, and it is quite time-consuming to debug. I usually try to limit changes.

Two codebases merged

In the 2.3.0 release of November 2021 I added a new custom command horde-reconfigure which does all the background magic of looking up or creating config snippets and linking them to the appropriate places, linking javascript from addon packages to web-readable locations and so on. This is essentially the same work the installer plugin does, but on demand. A user can run this when he has added new config files to an existing installation. Unfortunately, the runtime environments of the installer plugin and the custom command are very different in terms of available IO, known paths and details about the package. I took the opportunity to clean up code, refactor and rethink some parts to do the same things but in a more comprehensible way. As I was aware of the risks, I decided to leave the original installer untouched. I got some feedback and used it myself. It seemed to work well enough.

For the 2.4.0 release I decided to finally rebase the installer onto the command codebase and get rid of the older code. It turned out that the reconfigure command was lacking some details which are important in the install use case. Nobody ever complained because these settings are usually not changed or deleted outside the install/update phase. As of v2.4.4 the installer is feature complete again.

New behaviour in v2.4

The installer has been moved from the install/update phase to the autoload-dump phase. It will now process the installation as a whole rather than one library at a time. This simplifies things a lot. Previously, the installer ran for each installed package and potentially did a few procedures multiple times. Both the installer and the horde-reconfigure command will now issue some output to the console about their operation and they will process the installation only once, with the updated autoloader already configured. The changes will now also apply on removal of packages or on other operations which require a rewrite of the autoloader. The registry snippets now include comments explaining that they are autogenerated and how to override the autoconfigured values.

Outlook to 2.5 or 3.0

The composer API has improved over the last year. We need to be reasonably conservative to support older versions of composer as packaged by OS distributions. At some point in the future, however, I want to have a look at using composer to simplify life:

  • Improve Theme handling: Listing themes and their scope (global and app specific), setting default theme of an installation
  • Turning a regular installation into a development setup for specific libraries or apps
  • Properly registering local packages into composer’s package registry and autoloader (useful for distribution package handling).

Both composer’s native APIs and the installer plugin can support improving a horde admin’s or developer’s life:

  • Make horde’s own “test” utility leverage composer to show which optional packages are needed for which drivers or configurations
  • Expose some obvious installation health issues on the CLI.
  • Only expose options in the config UI which are supported by current PHP extensions and installed libraries
  • Expose a check if a database schema upgrade is needed after a composer operation, both human readable and machine consumable. This should not autorun.

The actual feature code may be implemented in separate libraries and out of scope for the installer itself. As a rule, horde is supposed to be executable without composer but this is moving out of focus more and more.

Maintaina/Horde UTF-8 on PHP 8

On recent OS distributions, two conflicting changes can bring trouble.

MariaDB refuses connections with ‘utf-8’ encoding

Recent MariaDB does not like the $conf[‘sql’][‘charset’] default value of ‘utf-8’. It runs fine if you change to the more precise ‘utf8mb4’ encoding. This is what recent MySQL understands ‘utf-8’ to be. You could also use ‘utf8mb3’ but this won’t serve modern users very well. The ‘utf8mb3’ value is what older MariaDB and MySQL internally used when the user told it to use ‘utf-8’. But this character set supports only a subset of Unicode, missing much-used icons like ☑☐✔✈🛳🚗⚡⅀ which might be used anywhere from calendar events sent out by travel agencies to todos or notes users try to save from copy/pasted other documents.

I have changed the sample deployment code to use utf8mb4 as the predefined config value.
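
For reference, the change boils down to a single configuration value; shown here as a sketch, where exactly your conf.php lives depends on the deployment:

    <?php
    // Horde SQL backend charset: use the precise MySQL/MariaDB name.
    $conf['sql']['charset'] = 'utf8mb4'; // instead of the old 'utf-8' default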

Shares SQL driver does not understand DB-native charsets

The Shares SQL driver does some sanitization and conversion when reading from or writing to the DB. The conversion code does not understand DB-native encodings like “utf8mb4”. I have applied a patch to the share library that detects and fixes this case, but I am not satisfied with this solution. First, this issue is bound to pop up elsewhere and I would not like to have the same code in multiple places. Either the DB abstraction library horde/db or the string conversion library in horde/util should provide a go-to solution for mapping/sanitizing charset names. Any library using the config value should know that it needs to be sanitized but should not be burdened with the details. I need to follow up on this.
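
One possible shape for such a go-to helper, purely as an illustration (the commit linked in the update below contains the actual generalized solution):

    <?php
    // Map DB-native charset names onto names the string conversion layer
    // understands; anything unknown is passed through untouched.
    function sanitizeCharset(string $charset): string
    {
        switch (strtolower(str_replace(['-', '_'], '', $charset))) {
            case 'utf8':
            case 'utf8mb3':
            case 'utf8mb4':
                return 'UTF-8';
            default:
                return $charset;
        }
    }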

Update

See https://github.com/horde/Util/commit/7019dcc71c2e56aa3a4cd66f5c81b5273b13cead for a possible generalized solution.

Maintaina Horde: Tumbleweed and PHP 8.1

PHP 8.1 is available off the shelf in openSUSE Tumbleweed. I will shortly prepare a PHP 8.1 / Tumbleweed version of the maintaina Horde containers. These will initially be broken due to some outdated language constructs. As PHP 7.4 will reach EOL by the end of this year, I decided not to bother with PHP 8.0 and to ensure compatibility with PHP 8.1 right away, while staying compatible with PHP 7.4 until the end of the year. This is not fun. PHP 8.x provides several features which allow for more concise code. I will not be able to use them.
This also means that for the time being I will produce code which you may find more verbose than necessary. While Constructor Promotion is mostly about being less verbose, Readonly Properties and Enums kill some of the pro-method arguments in the eternal discussion of whether getter methods or public properties are the more appropriate interface. Union Types and Intersection Types allow a flexibility of method interfaces which PHP 7.4 can only emulate. You can get far with type hints for static analysis, combined with boilerplate guard code inside a method, while dropping native type hints altogether or using insufficient surrogate interfaces. But it is really not shiny. Maintaining software which shows its age has its tradeoffs.
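
To illustrate the trade-off with a generic example (not Horde code): the PHP 8.1 variant is commented out because the 7.4 baseline must still parse the file.

    <?php
    // PHP 8.1 style: constructor promotion plus readonly properties.
    // class Money
    // {
    //     public function __construct(
    //         public readonly int $amount,
    //         public readonly string $currency
    //     ) {}
    // }

    // PHP 7.4 equivalent: boilerplate properties, assignments and getters,
    // with no engine-enforced immutability.
    class Money
    {
        /** @var int */
        private $amount;

        /** @var string */
        private $currency;

        public function __construct(int $amount, string $currency)
        {
            $this->amount = $amount;
            $this->currency = $currency;
        }

        public function getAmount(): int
        {
            return $this->amount;
        }

        public function getCurrency(): string
        {
            return $this->currency;
        }
    }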

Simplifying Routing / PSR-15 bootstrap in Horde

As you might remember from a previous post, Horde Core’s design is more complex than necessary or desirable for two main reasons:

  • Horde predates today’s standards like the Composer autoloader and tries to solve problems on its own. Changing that would impair Horde’s ability to run without composer, which we were hesitant to do, focusing on not breaking things that were previously possible.
  • Horde is highly flexible, extensible and configurable, which creates some chicken-egg problems to solve on each and every call to any endpoint inside a given app.

Today’s article concentrates on the latter problem. More precisely, we want to make routing more straightforward when a route is called.

A typical standalone or monolithic app usually is a composer root package. It knows its location relative to the autoloader, relative to the dependencies and relative to the fileroot of the composer installation. Moreover, the program usually knows about all its available routes. It will also have some builtin valid assumptions about how its different parts’ routes relate to the webroot.
None of this is true with a typical composer-based horde installation.

  • None of the apps is the root package
  • Apps are exposed to a web-readable subdir of the root package
  • While the relative filesystem location is known, each app can live in a separate subdomain, in the webroot or somewhere down the tree
  • Each app may reconfigure its template path, js path, themes path
  • The composer installer plugin provides a sane default. This default can be overridden
  • Each app can be served through multiple domains / vhosts with different registry and config settings in the same installation
  • Administrators can add local override routes
  • Parts of the code base rely on horde’s own runtime-configurable autoloader rather than composer.

This creates a lot of necessary complexity. The router needs to know the possible routes before it can map a request. To have the routes, the context described above must be established. Complexity cannot be removed without reducing flexibility. There is, however, a way out. The routing problem can be divided into three phases with different problems:

  • Development time – when routes are defined and changed frequently for a given app or service
  • Installation/Configuration time – when the administrator decides which apps’ routes will be available for your specific installation
  • Runtime – when a request comes in and the router must decide which route of which app needs to react – or none at all

Let’s ignore development time for now. It is just a complication of the other two cases. The design goal is to make runtime lean and simple. Runtime should initialize what the current route needs to work and as little as possible on top of that. Runtime needs to know all the routes and a minimal setup to make the router work. Complexity needs to be offloaded into installation time. Installation time needs to create a format of definite routes that runtime can process without a lot of setup and processing. As a side effect, we can gain speed for each individual call.

Modern autoloaders are similar in concept: They have a setup stage where all known autoloader rules of the different packages are collected. In composer, the autoloader is re-collected in each installation or update process. The autoloader is exposed through a well-known location relative to the root package, vendor/autoload.php – it can be consumed by the application without further runtime setup. The autoloader can be optimized further by processing the autoloading rules into a fixed map of classes to filenames (Level 1), making these maps authoritative without an attempt to fail over on misses (Level 2a) and finally caching hits and misses in the in-memory APCu cache. Each optimization step makes the lookup faster. This comes at the cost of flexibility. The mapping must be re-done whenever the installation changes. Otherwise things will fail. This is OK for production but it gets in the way of development. The same is true for the router.

The best optimization relies on the application code and configuration being static. The list of routes needs to be refreshed on change. Code updates are run through composer. The composer installer plugin can automatically refresh the router. Configuration updates can happen either through the horde web ui or through adding/editing files into the configuration area. Admins already know they need to run the composer horde-reconfigure command after they added new config files or removed files. Now they also need to run it when they changed file content. In development, routing information may change on the fly multiple times per hour. Offering a less optimized, more involved version of this route collection stage can help address the problem.

A new version of the RampageBootstrap codebase in horde/core is currently in development. It will offload more of horde’s early initialisation stages into a firmware stack and will reduce the early initialisation to the bare minimum. At the moment, I am still figuring out how we can do this in a backward compatible way.

Rdo: Persistence is not your model

Remember that post on how your backend might betray you? You can store your Turba addressbook into an LDAP tree, but if the addressbook is manipulated from LDAP side, your CardDAV Sync may be ignorant of this. The bottom line is: You cannot trust the backend. Nor should the persistence model govern your application internal model.

The development team at B1 works with a lot of inherited code from different areas and eras of FOSS and closed-source development. We see all styles of code and we have made all sorts of bad micro decisions ourselves over the years. One particularly hard question that comes up time and again is what the “true” model of data in an application is. Developers who come from traditional, monolithic web applications may say the true model is what is in the (relational) database and point out that it will closely influence the application’s internal model. Developers with a background in APIs and distributed apps will tend to think the messages going in and out of a service are the main thing. The application will be modeled after the messages, and persistence is a secondary thought. Then we have the school of DDD-trained developers who will say neither is right. The internal model should be mostly ignorant of persistence, and the external API is just another type of persistence.

Let’s explore the benefits and impacts of these options.

DB-centric applications

In a DB-centric application, your entities closely represent rows of data in DB tables. If you use ActiveRecord or DataMapper style ORMs, you will often use the ORM Entities as your business objects in the application. This works fairly well for CRUD style applications where most if not all fields in the database will be editable form fields on screen.

If your application does little beyond storing, retrieving, updating and deleting “records” of whatever type, you will have a straightforward development experience. Some frameworks may support autogenerating create/view/edit forms and searchable lists (html tables) out of your DB schema. Because it is the easiest way to add features to these types of application, the object model tends to be rich in attributes and limited in relations and segregation.

For example, it would be common for a “customer” entity to include separate fields for the customer’s street, city, country, email address… some fields would be marked as mandatory and others as optional, each accessible via getters and setters or public properties. This becomes cumbersome for entities which have subtypes with different sets of attributes or behaviours. Fields may be mandatory for one subtype, but optional or even forbidden for other subtypes. Attribute values are usually primitives (string, int) that can be mapped to DB column types easily. Developing these types of apps is really fast and concise as long as you do nothing fancy. As soon as you have any real amount of business logic and variance, it becomes cumbersome to maintain and hard to test.
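
A condensed sketch of such an attribute-rich, CRUD-friendly entity (illustrative only, no particular ORM):

    <?php
    // One property with getter and setter per DB column; subtypes with
    // different mandatory fields do not fit this shape well.
    class Customer
    {
        /** @var int|null */
        private $id;
        /** @var string */
        private $name = '';
        /** @var string */
        private $street = '';
        /** @var string */
        private $city = '';
        /** @var string */
        private $country = '';
        /** @var string|null optional column, nullable */
        private $email;

        public function getName(): string { return $this->name; }
        public function setName(string $name): void { $this->name = $name; }
        public function getEmail(): ?string { return $this->email; }
        public function setEmail(?string $email): void { $this->email = $email; }
        // ... and so on for every other column.
    }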

API-centric applications

A similar school of thought comes out of microservice development. In many cases, CRUD services can be prototyped from autogenerated code defined through a Swagger/OpenAPI definition file. Sometimes there is little to do after this autogeneration, including persistence to the relational database. If you use non-relational persistence like CouchDB, you may even get along with some variance and depth in the schema. Even a json or yaml file on disk gets you very far (at least in the prototype stage).

However, you will have scenarios where the same application object looks quite different between API requests. Users may have limited permissions on querying details of an object, and different APIs with different use cases may have limited need for the deeper details of objects. Retrieving your list of customer IDs and customer names for populating a dropdown is different from being able to view their billing data or contact persons’ details.

Domain Model centric approach

If your application has to deal with structured aggregates and consistency rules, exhibits rich behaviour, services multiple types of backends or is otherwise complicated, this might be the right approach.

At its core, the (micro)application is fairly ignorant of both persistence, export formats and external API.
The domain model deviates from the persistence model or message representations. It has internal constraints and business rules. A domain entity is retrieved from the repository in a valid and complete state and all transformations will result in a valid state. Transformations usually happen through defined operations rather than through a set of getters and setters. Setters may not even exist for many types of domain objects.
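
A small sketch of what “transformations through defined operations” can look like (illustrative domain, not Horde code):

    <?php
    // The entity guards its own invariants; there are no setters to bypass them.
    final class Order
    {
        /** @var string */
        private $state = 'open';

        /** @var array<string, int> sku => quantity */
        private $lines = [];

        public function addLine(string $sku, int $quantity): void
        {
            if ($this->state !== 'open') {
                throw new DomainException('Only open orders can be changed');
            }
            if ($quantity < 1) {
                throw new DomainException('Quantity must be positive');
            }
            $this->lines[$sku] = ($this->lines[$sku] ?? 0) + $quantity;
        }

        public function commit(): void
        {
            if ($this->lines === []) {
                throw new DomainException('An empty order cannot be committed');
            }
            $this->state = 'committed';
        }
    }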

Through this, you can trust your objects by contract and reduce validations in the actual operations. This comes at a price: In a domain-centric application, the number of classes may double or triple compared to other approaches.

You have your business object with all the internal integrity aspects. You have another version of the same object which is used for I/O – the so-called Data Transfer Object or DTO. There are versions where DTOs are either typed classes or untyped plain objects with just arbitrary public attributes. They may even just be structured arrays. Each option comes with a drawback. These world-readable objects expose all their relevant data to some kind of I/O – views rendered on the server side, files to export, REST API messages coming in or going out. A third version of your entity may be the persistence model(s), especially in relational databases. They may decompose your domain entity into multiple database tables, LDAP tree nodes or even multiple representations in no-sql databases. The main effort is getting all these transformations right. This type of application may seem verbose and repetitive. On the other hand, a strong object model with hard internal constraints and few dependencies on unrelated aspects is very straight-forward to unit test. This makes it suitable for large scale applications.

Compromises and practice

You do not have to commit to one approach with all consequences. This might even make less sense in languages which do not really enforce type hints (like Python) or have no enforced concept of private and protected properties (again, like Python). In many cases, you can get very far by discipline alone. Just restrict usage of setters to the persistence and I/O parts of your application. Add methods and behaviour to your persistence layer entities and, even better, add an interface. Type-hint your business logic against the interface. You can also make your ORM mapper double as a Repository. You can scale out to real domain objects step by step as needed. Add conversion methods which turn a domain entity into one or many ORM entities and commit them to the DB. If your aggregate is not very suitable for this, hide your ORM mappers in a dedicated repository class. You can scale out as little or as much as your needs dictate.

On the other hand, you may abuse your ORM entities to populate views or data formats like XML, json or yaml.

Why we move away from putting logic into Rdo entities

Rdo is Rampage Data Objects, Horde’s minimalistic ORM solution. We have come a long way using the described tricks to make Horde_Rdo_Mappers work as repository implementations and Horde_Rdo_Base entities as business objects. However, we run into more and more situations where this is not appropriate. Using and lazy-loading relations to child objects is fine in the read use case, but persisting changes to such structures becomes tedious and error-prone. Any logic tightly coupled to Rdo objects is also hard to unit test. Rdo-centric development does not mix well with the Shares library or with handling users and identities. As Rdo entities carry around references to the mapper and the database driver, they tend to bloat debug output. More fundamentally, our object models are evolving towards a design which does not really look like database tables at all. We still love Rdo and may resort to Rdo entities with interfaces here and there, especially in prototyping. But our development model has long since shifted towards a unittest-early or unittest-driven approach.

Beware of the edges

Domain Driven Design purists may emphasize how domain models will rarely have attribute setters, and some will even argue you should minimize your getters (tell, don’t ask). However, at some point we need to get at raw attributes to compose messages to the edges, to answer API requests or to persist any objects. There are multiple ways to do it, with different implications. Remember the Data Transfer Objects? I tend to have a method on my domain aggregate to “eject” a fully detailed, world-readable object. In many cases, it is even typed. This chatty object can now be handed to formatters which transform it into API messages or data files. So the operation would be: Construct the domain aggregate – and fail if data is invalid – and from the domain aggregate, construct the DTO. Use the DTO for chatty jobs.
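
A sketch of such an “eject” method under assumed names; the DTO and aggregate here are invented for illustration:

    <?php
    // The chatty, world-readable object handed to formatters and views.
    final class CustomerDto
    {
        /** @var string */
        public $id = '';
        /** @var string */
        public $name = '';
        /** @var string[] */
        public $addresses = [];
    }

    final class CustomerAggregate
    {
        private $id;
        private $name;
        private $addresses;

        public function __construct(string $id, string $name, array $addresses)
        {
            // The real constructor enforces the aggregate's invariants and
            // fails on invalid data; validation is omitted in this sketch.
            $this->id = $id;
            $this->name = $name;
            $this->addresses = $addresses;
        }

        public function toDto(): CustomerDto
        {
            $dto = new CustomerDto();
            $dto->id = $this->id;
            $dto->name = $this->name;
            $dto->addresses = $this->addresses;
            return $dto;
        }
    }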

But how about the other way around? Messages from the outside can be malformed in many ways. They may be tampered with, they may miss details by design. If I exported an object to a user’s limited API and he sends an update, how to handle that? If an API user does not even know advanced attributes that are subject to other views and use cases, how would he create new entities?

I tend to have a method on the repository which accepts these messages and tries to turn them into Domain Objects. If the partial message contains some ID field or a sufficient set of data to form a unique key, I will first ask the backend for the original details of the object and then apply the message details – and fail if this violates the constraints of my aggregate. If no previous version exists in the backend, missing information is added from defaults and generators, including a unique id of some sort. The message is applied on top.

This approach may be expensive. I am creating full domain objects just to render database content to another format which may not even contain most of the retrieved data. On the other hand, if I know an object exists in a relational db, I could use some cheap SQL to apply partial changes without ever constructing the full model.

Creating straight-to-output repos with optimized queries is a tradeoff. They mean additional maintenance burden whenever the model changes, additional parts to cover with tests. I only do this when performance actually becomes an issue, not by default.

Creating straight-to-persistence methods for partial messages is even less desirable. It is circumventing integrity check at code level. However, sometimes the message itself is the artifact to persist – to be later processed by a queue or other asynchronous process.

PHP 8 Horde (Maintaina)

Over the next few days, all Horde libraries and apps in the maintaina-com organization will be whitelisted for PHP 8.x in their FRAMEWORK_6_0 branch development versions. One next step will be a flavour of the openSUSE-based containers and deployments which runs off PHP 8.0. While a few libraries have been enabled for PHP 8, it is almost certain that horde as a whole will not run correctly. Main culprits are the horde/rpc and horde/form packages and their user code, but there are some other ugly places that need attention.

Development Baseline at 7.4

Code in the maintaina-com repo will stay compatible with PHP 7.4 – at least for the time being. Decisions at Horde LLC may override that at some point, or time may just march on. PHP 7.4 was released two years ago, ended active support 20 days ago and will reach its end of life for upstream security support on November 28th, 2022 – roughly 11 months from now. Linux distributions have a tradition of following their own schedules and backporting security fixes. openSUSE LEAP 15.3 ships with PHP 7.4 while openSUSE Tumbleweed has switched to PHP 8.0.13 – with PHP 8.1 versions becoming available from official repos soon.

This is a tough decision as PHP 8 and 8.1 have some really interesting features which would allow us to develop more elegant, more readable and more efficient code. For software that is not intended for this audience, I will immediately allow using 8.x-only features as soon as we are confident with Horde’s compatibility. This is going to be a major theme of January and possibly February.

No need to switch right now

If you are running Horde as of horde.org master branches or maintaina-com FRAMEWORK_6_0 branches off PHP 7.4, you should NOT switch right now. We will announce once we think any leftover issues are minor enough for an acceptable early adopter experience.

No particular love for 8.0.x

There is no guarantee our runtime will stay fixed at 8.0. PHP 8.1 offers a lot of new features and a considerable performance boost for some relevant scenarios. While making Maintaina Horde work with 8.x on a 7.4 feature baseline is the first step, the logical next step is upgrading feature baseline to 8.1 or higher. This will be much less of a problem if we get an official Horde 6 release in the meantime and users can choose between a properly conservative release version and a more adventurous Maintaina version. This is not something I have under control though. Horde LLC do as they find appropriate and sustainable and for many users, there is little reason to choose Maintaina over the official releases once we have a Horde 6 version that properly runs on recent PHP and supports Composer out of the box. I am perfectly fine with that and looking forward to it. I will always assist with a migration path as far as I can afford to.

Time is Money, Money buys Time

If you have an urgent commercial interest in a PHP 8-ready Horde version, you really do not want to rely on Maintaina’s timelines and priorities which may be subject to change. You will need to spend money. Approach somebody to do it for you, either Horde LLC or the company I work for, B1 Systems GmbH – both are formidable places to look for Horde-experienced development resources.

Update 2021-12-18 21:00 CET

I just ran the update to the metadata as a mass operation for everything which contains a .horde.yml file – the rest will have to wait until I stumble across it. I leveraged an edited version of horde/git-tools, some bash magic, some mass editing in vscode using their regex tool and some manual fixing.

  • All packages now formally require “php”: “^7.4 || ^8”
  • If horde-installer-plugin is required, I now go for “^2 || dev-FRAMEWORK_6_0” – however in maintaina-com/Core, I have a job that rebuilds composer.json on commit and this job showed me that the components tool needs an update in this aspect.
  • SPDX license code warnings for LGPL and GPL versions have been remedied to LGPL-2.0-only, LGPL-3.0-only, GPL-3.0-only each
  • Added the CI workflow where missing. Mostly it will fail until further editing. This is intentional.
  • I did NOT unify all versions of CI workflow as some deviations are intentional. I did however unify PHP versions for the unit tests to “7.4”, “8.0” and “latest” and I did unify phpunit versions to “9.5” and “latest”.
  • Unified/added the phpdoc workflow and the update-satis workflow as we had multiple versions for no good reason. I have settled for a version of the phpdoc job that will scan lib/, src/ and app/ if they exist
  • Cleaned up a lot of metadata mess in the Kolab related packages.
  • Removed some version: tags from composer.json files
  • Removed the optional pear dependency of imp for the ASN1 implementation from phpseclib – need to look for a proper composer-ready and less outdated replacement.

While the mass changes themselves seem to have gone right, the resulting avalanche of CI jobs showed some issues:

  • phpdoc job and update-satis job fail if they run in parallel and the satis repo content has changed since checkout. Either give the push commands in the loop a minute to wait each time or make the job smarter about handling these clashes. Still, failing is better than silently overwriting content
  • Having so many versions of the CI job is not maintainable. Need to factor out the boilerplate into an action, make version requirements a config variable with a builtin default and have some mechanism for the rare cases where extra software is needed for meaningful QA, e.g. database and storage related items.
  • After getting this migration done, upgrading the git-tools utility may be an interesting exercise in PHP 8 and PHPStan.
  • I may have created unnecessary conflicts with some open pull requests. Sorry, contributors. I will improve.

Horde/Skeleton: Modernized

Over the last months, a lot of new technologies have entered the Horde ecosystem. It was long overdue to modernize the Skeleton example app.

No more un-namespaced code

Skeleton has completely migrated to PSR-4 namespaced code in /src/ rather than traditional, unnamespaced code in /lib/. This includes framework integration classes like Application, Api, Ajax\Application but also the portal blocks.

All application internal classes of skeleton are now served via the Composer Autoloader and follow the PSR-4 standard. This requires the very latest releases of horde/horde (6.0.0alpha6) and horde/core (v3.0.0alpha9) to work correctly. The only exception is the database schema migration which intentionally does not follow regular autoloading conventions.

No more index.php

Client pages that traditionally called into the application’s internal classes have been removed and replaced by routes. This includes the index.php file. A default route handles the skeleton/ and skeleton/index.php cases. The contents of the example UI have not been changed; they are only implemented differently.

Using horde/routes and horde/http_server

The new skeleton uses horde/routes to describe available routes in the app and horde/http_server to implement the controller classes behind these routes. The code comes with extensive documentation comments. horde/http_server implements the PSR-15: HTTP Server Request Handlers standard used by most modern PHP Frameworks.
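
As a generic illustration of the controller side (not the actual skeleton classes), a PSR-15 request handler behind a route simply turns the matched request into a PSR-7 response:

    <?php
    use Psr\Http\Message\ResponseFactoryInterface;
    use Psr\Http\Message\ResponseInterface;
    use Psr\Http\Message\ServerRequestInterface;
    use Psr\Http\Server\RequestHandlerInterface;

    class GreetingController implements RequestHandlerInterface
    {
        /** @var ResponseFactoryInterface a PSR-17 response factory */
        private $responseFactory;

        public function __construct(ResponseFactoryInterface $responseFactory)
        {
            $this->responseFactory = $responseFactory;
        }

        public function handle(ServerRequestInterface $request): ResponseInterface
        {
            $response = $this->responseFactory->createResponse(200)
                ->withHeader('Content-Type', 'text/plain');
            $response->getBody()->write('Hello from a skeleton route');
            return $response;
        }
    }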

Inter-App API

Skeleton now includes an example of the inter-app API implemented through the registry. The same interface is used as the basis for the JSON-RPC and XML-RPC APIs.

Full Backward Compatibility

The changed libraries still work with unnamespaced or partially converted apps. Implementers can work according to their own schedule. However, there are some rules to keep in mind:

  • The namespaced versions of Application, Api, Test and Ajax\Application take precedence over their unnamespaced counterparts. Implementers can leave the unnamespaced code as-is or turn it into wrappers like
    lib/Application.php
    
    <?php
    /**
     * Backward compatibility wrapper.
     *
     * @deprecated Call into Horde\Skeleton\Application directly instead. 
     */ 
     class Skeleton_Application extends Horde\Skeleton\Application {}
    
    
    
  • Portal Blocks should NOT be wrapped or duplicated. They should exist as either namespaced or unnamespaced versions. You can have both types in the same application, but if you have a wrapped or copied block, it will show up twice.
  • Ajax application handler classes only have one integration point, /{lib, src}/Ajax/Application.php. You can upgrade them any time, just change the reference in the Application class. The Ajax Application class itself can be duplicated or wrapped, but the namespaced version will always be chosen. The wrapper would be just a transitional backwards compatibility measure so your application still works with earlier alpha versions of the framework.

Not in this release

The current version of skeleton still leaves room for improvement. Not all external libraries used in the code base are already namespaced or otherwise modernized. The PageOutput helper is still emitting output which needs to be caught and redirected to the output stream as part of the PSR-7 HTTP Response object. Future versions should use a stream-ready implementation to reduce boilerplate. Also, there should be some ready-made controllers for standard cases like UI output or REST.


Horde/Log Rewrite goes PSR-3

I have rewritten Horde/Log based on the PSR-3 Logging standard published by PHP-FIG.

Why?

It had to be done at some point. The current wave of the Corona pandemic has cancelled some joyful other activities planned for this weekend and I had been looking into PSR-3 loggers for quite some time. Most importantly, I wanted to do something else which needed a separate logging facility and I was not ready to invest time into the various pitfalls of the old logger design. Just look at the Logger Factory – it is much too complex. Currently, there is no good balance between having too few logs in general and being flooded with mostly useless details of all the different aspects of horde. Filtering is essential, but the data cannot easily be divided into the contexts that are relevant to different problems and tasks.

Goals

My main goal was simple: Consuming code should be able to give the Logger more context about the messages it sends. This can be helpful to sort out what is interesting and what is just distracting noise. For example, a logger-aware library or application may send markers along with the actual message. All messages go to the same logger, but different log handlers may be set up to only care about certain aspects. Want to write all caldav sync errors for a specific user to a separate file? Want to keep a separate log of failed login attempts? Want to forward your time tracking application’s “project closed” log events to an external json-consuming system? Even though the old logger had all the necessary parts, it made these tasks too difficult.
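
From the consuming code’s point of view, the context idea is just the standard PSR-3 context array; the channel key and the way handlers are wired up are assumptions for illustration:

    <?php
    use Psr\Log\LoggerInterface;

    function recordFailedLogin(LoggerInterface $logger, string $user, string $remoteAddr): void
    {
        // The message carries structured context; filters and handlers decide
        // later whether, where and how this event is written.
        $logger->warning('Failed login attempt for {user}', [
            'user' => $user,
            'remoteAddr' => $remoteAddr,
            'channel' => 'auth', // a filter could route only 'auth' events to a separate file
        ]);
    }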

As a product, the new logger is not very interesting outside the Horde context. While it can be used for logging in PSR-3 aware libraries, it still has too many dependencies on other parts of the Horde ecosystem. To reduce this, I may factor out the Constraints log filter into a separate library. The opposite case is more interesting: If PSR-3 replaces tight coupling to a custom logger, projects may use their existing logger. They have one less alien dependency to deal with. This might make some libraries more attractive, like Horde/Activesync or Horde/Imap.

From a code quality perspective, I also wanted to make the code more transparent to readers and tools. The old implementation relied on a __call magic method without really needing it, multiple parts relied on tersely documented array structures. The new implementation passes PHPStan Level 8. Coverage with parameter and return type hints is very high, with native property and return types following where possible. The current implementation is based on version 1.1.4 of the standard. When moving to a PHP 8 minimum requirement, this can be upgraded to the more strictly typed version 3.0 standard.

However, I am still missing unit tests against the new code. As it is substantially different from the H5 implementation, I could not easily adapt the existing test cases. This will require more work.

Also, integration of the new Logger into the core system is a separate task. Old and new logger infrastructure will have to coexist for some time. There is simply too much code that needs to be touched.

Architecture

The logger is architected as a modular system. The consuming code only has to deal with the Horde\Log\Logger. It implements the Psr\Log\LoggerInterface. The logger can support custom log levels not covered by RFC 5424. Log Levels are implemented as objects with a string name and a criticality number value.

The PSR-3 standard mandates log messages may be strings or any object that can be turned into a string. Internally, we convert them into LogMessage objects containing the string message, a reference to the LogLevel object and a hash of context attributes.

LogFilters are gatekeepers which look into a LogMessage and decide if it may be logged. They can be used as global LogFilters to suppress a log message altogether or as local filters which only affect a certain log handler. The Logger may include many different LogHandlers. These implement the actual processing of logs, writing them to a file, to a local syslog program or sending them over the network. Log messages may further be formatted for different needs. One handler may want to send XML documents to another server, another handler may store plaintext in a structured file. PSR-3 proposes a templating format where the logger can fill placeholders in the message with data from the context array. In Horde/Log, this job is done by a series of LogFormatters. Depending on configuration, a LogHandler can have zero, one or many such LogFormatters. Not all combinations make sense.

PHP 8 readiness

The new code is ready to run on PHP 7.4 and PHP 8. However, the horde/constraint and horde/thrift dependencies have not yet been upgraded for PHP 8, limiting usefulness. This will be done as time permits, with many other topics having higher priority.
