Rdo: Persistence is not your model

Remember that post on how your backend might betray you? You can store your Turba addressbook in an LDAP tree, but if the addressbook is manipulated from the LDAP side, your CardDAV sync may be ignorant of this. The bottom line is: you cannot trust the backend. Nor should the persistence model govern your application’s internal model.

The development team at B1 works with a lot of inherited code from different areas and eras of FOSS and closed-source development. We see all styles of code and we have made all sorts of bad micro decisions ourselves over the years. One particularly hard question that comes up time and again is what the “true” model of data in an application is. Developers who come from traditional, monolithic web applications may say the true model is what is in the (relational) database and point out that it will closely influence the application’s internal model. Developers with a background in APIs and distributed apps will tend to think the messages going in and out of a service are the main thing: the application will be modeled after the messages, and persistence is a secondary thought. Then we have the school of DDD-trained developers who will say neither is right: the internal model should be mostly ignorant of persistence, and the external API is just another type of persistence.

Let’s explore the benefits and impacts of these options.

DB-centric applications

In a DB-centric application, your entities closely represent rows of data in DB tables. If you use ActiveRecord or DataMapper style ORMs, you will often use the ORM entities as your business objects in the application. This works fairly well for CRUD-style applications where most, if not all, fields in the database are editable form fields on screen.

If your application does little beyond storing, retrieving, updating and deleting “records” of whatever type, you will have a straightforward development experience. Some frameworks may support autogenerating create/view/edit forms and searchable lists (HTML tables) out of your DB schema. Because it is the easiest way to add features to these types of applications, the object model tends to be rich in attributes and limited in relations and segregation.

For example, it would be common for a “customer” entity to include separate fields for the customer’s street, city, country, email address… some fields would be marked as mandatory and others as optional, each accessible via getters and setters or public properties. This becomes cumbersome for entities which have subtypes with different sets of attributes or behaviours: fields may be mandatory for one subtype, but optional or even forbidden for other subtypes. Attribute values are usually primitives (string, int) that map easily to DB column types. Developing these types of apps is really fast and concise as long as you do nothing fancy. As soon as you have any real amount of business logic and variance, they become cumbersome to maintain and hard to test.
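
To make this concrete, here is a minimal sketch of such an attribute-rich entity (class and field names made up for illustration):

// One class per table, one public property per column, primitives everywhere.
class Customer
{
    public ?int $id = null;          // customers.id
    public string $name = '';        // customers.name, mandatory
    public string $street = '';      // customers.street, optional
    public string $city = '';        // customers.city, optional
    public string $country = '';     // customers.country, optional
    public ?string $email = null;    // customers.email, optional
}

Every column is directly readable and writable; whatever validation exists tends to live in the form layer rather than in the object itself.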

API-centric applications

A similar school of thought comes out of microservice development. In many cases, CRUD services can be prototyped from code autogenerated out of a Swagger/OpenAPI definition file. Sometimes there is little to do after this autogeneration, including the persistence to the relational database. If you use non-relational persistence like CouchDB, you may even get along with some variance and depth in the schema. Even a JSON or YAML file on disk gets you very far (at least in the prototype stage).

However, you will have scenarios where the same application object looks quite different between API requests. Users may have limited permissions to query details of an object; different APIs with different use cases may have limited need for the deeper details of objects. Retrieving a list of customer IDs and names to populate a dropdown is different from being able to view their billing data or contact persons’ details.

Domain Model centric approach

If your application has to deal with structured aggregates and consistency rules, exhibits rich behaviour, serves multiple types of backends or is otherwise complicated, this might be the right approach.

At its core, the (micro)application is fairly ignorant of persistence, export formats and external APIs.
The domain model deviates from the persistence model and from message representations. It has internal constraints and business rules. A domain entity is retrieved from a repository in a valid and complete state, and all transformations will result in a valid state. Transformations usually happen through defined operations rather than through a set of getters and setters. Setters may not even exist for many types of domain objects.

Through this, you can trust your objects by contract and reduce validations in the actual operations. This comes at a price: in a domain-centric application, the number of classes may double or triple compared to the other approaches.

You have your business object with all the internal integrity aspects. You have another version of the same object which is used for I/O – the so-called Data Transfer Object or DTO. DTOs may be typed classes or untyped plain objects with just arbitrary public attributes; they may even be structured arrays. Each option comes with a drawback. These world-readable objects expose all their relevant data to some kind of I/O – views rendered on the server side, files to export, REST API messages coming in or going out.

A third version of your entity may be the persistence model(s), especially with relational databases. They may decompose your domain entity into multiple database tables, LDAP tree nodes or even multiple representations in NoSQL databases.

The main effort is getting all these transformations right. This type of application may seem verbose and repetitive. On the other hand, a strong object model with hard internal constraints and few dependencies on unrelated aspects is very straightforward to unit test. This makes it suitable for large scale applications.
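
A minimal sketch of this split (all names hypothetical): the domain entity guards its invariants and exposes operations, while the DTO is a dumb, world-readable bag of primitives:

final class CustomerDto
{
    public ?int $id = null;
    public string $name = '';
    public string $email = '';
}

final class Customer
{
    private int $id;
    private string $name;
    private string $email;

    public function __construct(int $id, string $name, string $email)
    {
        // Invariants live here: an instance in an invalid state cannot exist.
        if ($name === '') {
            throw new InvalidArgumentException('Customer name must not be empty');
        }
        if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
            throw new InvalidArgumentException('Invalid email address');
        }
        $this->id = $id;
        $this->name = $name;
        $this->email = $email;
    }

    // A defined operation instead of a setter.
    public function changeEmail(string $email): void
    {
        if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
            throw new InvalidArgumentException('Invalid email address');
        }
        $this->email = $email;
    }

    // "Eject" a fully detailed, world-readable DTO for I/O.
    public function toDto(): CustomerDto
    {
        $dto = new CustomerDto();
        $dto->id = $this->id;
        $dto->name = $this->name;
        $dto->email = $this->email;
        return $dto;
    }
}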

Compromises and practice

You do not have to commit to one approach with all its consequences. This might even make less sense in languages which do not really enforce type hints (like Python) or have no enforced concept of private and protected properties (again, like Python). In many cases, you can get very far by discipline alone. Just restrict the usage of setters to the persistence and I/O parts of your application. Add methods and behaviour to your persistence layer entities and, even better, add an interface and hint your business logic against the interface. You can also make your ORM mapper double as a repository. You can scale out to real domain objects step by step as needed: add conversion methods which turn a domain entity into one or many ORM entities and commit them to the DB. If your aggregate is not very suitable for this, hide your ORM mappers in a dedicated repository class. You can scale out as little or as much as your needs dictate.
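
A sketch of that discipline-based middle ground (interface and entity names made up; the Rdo specifics are simplified):

interface CustomerInterface
{
    public function name(): string;
    public function changeEmail(string $email): void;
}

// The ORM entity doubles as the implementation ...
class CustomerEntity extends Horde_Rdo_Base implements CustomerInterface
{
    public function name(): string
    {
        // Rdo resolves column values through magic properties.
        return $this->name;
    }

    public function changeEmail(string $email): void
    {
        if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
            throw new InvalidArgumentException('Invalid email address');
        }
        $this->email = $email;
    }
}

// ... and business logic is hinted against the interface only.
function sendInvoice(CustomerInterface $customer): void
{
    // Works with the Rdo entity today and a real domain object tomorrow.
}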

On the other hand, you may abuse your ORM entities to populate views or data formats like XML, JSON or YAML.

Why we move away from putting logic into Rdo entities

Rdo is Rampage Data Objects, Horde’s minimalistic ORM solution. We have come a long way using the described tricks to make Horde_Rdo_Mappers work as repository implementations and Horde_Rdo_Base entities as business objects. However, we run into more and more situations where this is not appropriate. Using and lazy-loading relations to child objects is fine in the read use case, but persisting changes to such structures becomes tedious and error-prone. Any logic tightly coupled to Rdo objects is also hard to unit test. Rdo-centric development does not mix well with the Shares library or with handling users and identities. As Rdo entities carry around references to the mapper and the database driver, they tend to bloat debug output. More fundamentally, our object models are evolving towards a design which does not really look like database tables at all. We still love Rdo and may resort to Rdo entities with interfaces here and there, especially in prototyping. But our development model has long since shifted towards unit-test-early or even unit-test-driven.

Beware of the edges

Domain Driven Design purists may emphasize that domain models will rarely have attribute setters, and some will even argue you should minimize your getters (tell, don’t ask). However, at some point we need to get raw attributes out to compose messages for the edges, to answer API requests or to persist objects. There are multiple ways to do it, with different implications. Remember the Data Transfer Objects? I tend to have a method on my domain aggregate to “eject” a fully detailed, world-readable object. In many cases, it is even typed. This chatty object can then be handed to formatters which transform it into API messages or data files. So the operation would be: construct the domain aggregate – and fail if the data is invalid – and from the domain aggregate, construct the DTO. Use the DTO for the chatty jobs.

But how about the other way around? Messages from the outside can be malformed in many ways. They may be tampered with, or they may miss details by design. If I exported an object to a user’s limited API and he sends an update, how do I handle that? If an API user does not even know the advanced attributes that are subject to other views and use cases, how would he create new entities?

I tend to have a method on the repository which accepts these messages and tries to turn them into domain objects. If the partial message contains some ID field or a sufficient set of data to form a unique key, I will first ask the backend for the original details of the object and then apply the message details – and fail if this violates the constraints of my aggregate. If no previous version exists in the backend, missing information is added from defaults and generators, including a unique ID of some sort. The message is applied on top.
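
A sketch of such a repository method (names hypothetical, error handling trimmed; the backend is assumed to return associative arrays):

final class CustomerRepository
{
    private CustomerBackend $backend;   // hypothetical storage gateway
    private IdGenerator $idGenerator;   // hypothetical unique-id source

    public function __construct(CustomerBackend $backend, IdGenerator $idGenerator)
    {
        $this->backend = $backend;
        $this->idGenerator = $idGenerator;
    }

    // Turn a partial, untrusted message into a valid domain object.
    public function fromMessage(array $message): Customer
    {
        if (isset($message['id'])) {
            // Ask the backend for the authoritative state first,
            // then apply the partial message on top.
            $data = array_merge($this->backend->fetchById((int)$message['id']), $message);
        } else {
            // No previous version: fill the gaps from defaults and generators.
            $data = array_merge(
                ['id' => $this->idGenerator->next(), 'name' => '', 'email' => ''],
                $message
            );
        }

        // The constructor enforces the aggregate's constraints and
        // throws if the merged state violates them.
        return new Customer((int)$data['id'], (string)$data['name'], (string)$data['email']);
    }
}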

This approach may be expensive: I am creating full domain objects just to render database content into another format which may not even contain most of the retrieved data. On the other hand, if I know an object exists in a relational DB, I could use some cheap SQL to apply partial changes without ever constructing the full model.

Creating straight-to-output repos with optimized queries is a tradeoff. They mean an additional maintenance burden whenever the model changes, and additional parts to cover with tests. I only do this when performance actually becomes an issue, not by default.

Creating straight-to-persistence methods for partial messages is even less desirable: it circumvents integrity checks at the code level. However, sometimes the message itself is the artifact to persist – to be processed later by a queue or another asynchronous process.

PHP 8 Horde (Maintaina)

Over the next few days, all Horde libraries and apps in the maintaina-com organization will be whitelisted for PHP 8.x in their FRAMEWORK_6_0 branch development versions. A next step will be a flavour of the openSUSE based containers and deployments which runs off PHP 8.0. While a few libraries have been enabled for PHP 8, it is almost certain that Horde as a whole will not run correctly yet. The main culprits are the horde/rpc and horde/form packages and their user code, but there are some other ugly places that need attention.

Development Baseline at 7.4

Code in the maintaina-com repos will stay compatible with PHP 7.4 – at least for the time being. Decisions at Horde LLC may override that at some point, or time may just march on. PHP 7.4 was released two years ago, ended active support 20 days ago and will reach its upstream security EOL on November 28th, 2022 – roughly 11 months from now. Linux distributions have a tradition of following their own schedules and backporting security fixes. openSUSE Leap 15.3 ships with PHP 7.4, while openSUSE Tumbleweed has switched to PHP 8.0.13 – with PHP 8.1 versions becoming available from the official repos soon.

This is a tough decision, as PHP 8 and 8.1 have some really interesting features which would allow us to write more elegant, more readable and more efficient code. For software that is not intended for this audience, I will allow using 8.x-only features as soon as we are confident about Horde’s compatibility. This is going to be a major theme of January and possibly February.

No need to switch right now

If you are running Horde as of horde.org master branches or maintaina-com FRAMEWORK_6_0 branches off PHP 7.4, you should NOT switch right now. We will announce once we think any leftover issues are minor enough for an acceptable early adopter experience.

No particular love for 8.0.x

There is no guarantee our runtime will stay fixed at 8.0. PHP 8.1 offers a lot of new features and a considerable performance boost for some relevant scenarios. While making Maintaina Horde work with 8.x on a 7.4 feature baseline is the first step, the logical next step is upgrading feature baseline to 8.1 or higher. This will be much less of a problem if we get an official Horde 6 release in the meantime and users can choose between a properly conservative release version and a more adventurous Maintaina version. This is not something I have under control though. Horde LLC do as they find appropriate and sustainable and for many users, there is little reason to choose Maintaina over the official releases once we have a Horde 6 version that properly runs on recent PHP and supports Composer out of the box. I am perfectly fine with that and looking forward to it. I will always assist with a migration path as far as I can afford to.

Time is Money, Money buys Time

If you have an urgent commercial interest in a PHP 8-ready Horde version, you really do not want to rely on Maintaina’s timelines and priorities which may be subject to change. You will need to spend money. Approach somebody to do it for you, either Horde LLC or the company I work for, B1 Systems GmbH – both are formidable places to look for Horde-experienced development resources.

Update 2021-12-18 21:00 CET

I just ran the update to the metadata as a mass operation for everything which contains a .horde.yml file – the rest will have to wait until I stumble across it. I leveraged an edited version of horde/git-tools, some bash magic, some mass editing in vscode using their regex tool and some manual fixing.

  • All packages now formally require "php": "^7.4 || ^8".
  • If horde-installer-plugin is required, I now go for "^2 || dev-FRAMEWORK_6_0" – however, in maintaina-com/Core I have a job that rebuilds composer.json on commit, and this job showed me that the components tool needs an update in this aspect.
  • SPDX license code warnings for LGPL and GPL versions have been remedied to LGPL-2.0-only, LGPL-3.0-only and GPL-3.0-only respectively.
  • Added the CI workflow where missing. Mostly it will fail until further editing. This is intentional.
  • I did NOT unify all versions of the CI workflow as some deviations are intentional. I did however unify the PHP versions for the unit tests to "7.4", "8.0" and "latest", and the phpunit versions to "9.5" and "latest".
  • Unified/added the phpdoc workflow and the update-satis workflow, as we had multiple versions for no good reason. I have settled on a version of the phpdoc job that will scan lib/, src/ and app/ if they exist.
  • Cleaned up a lot of metadata mess in the Kolab related packages.
  • Removed some version: tags from composer.json files.
  • Removed the optional PEAR dependency of imp on the ASN1 implementation from phpseclib – need to look for a proper composer-ready and less outdated replacement.

While the mass changes themselves seem to have gone right, the resulting avalanche of CI jobs showed some issues:

  • The phpdoc job and the update-satis job fail if they run in parallel and the satis repo content has changed since checkout. Either give the push commands in the loop a minute to wait each time or make the job smarter about handling these clashes. Still, failing is better than silently overwriting content.
  • Having so many versions of the CI job is not maintainable. I need to factor out the boilerplate into an action, make version requirements a config variable with a builtin default and have some mechanism for the rare cases where extra software is needed for meaningful QA, e.g. database and storage related items.
  • After getting this migration done, upgrading the git-tools utility may be an interesting exercise in PHP 8 and PHPStan.
  • I may have created unnecessary conflicts with some open pull requests. Sorry, contributors. I will improve.

Horde/Skeleton: Modernized

Over the last months, a lot of new technologies have entered the Horde ecosystem. It was long overdue to modernize the Skeleton example app.

No more un-namespaced code

Skeleton has completely migrated to PSR-4 namespaced code in /src/ rather than traditional, unnamespaced code in /lib/. This includes framework integration classes like Application, Api and Ajax\Application, but also the portal blocks.

All application internal classes of skeleton are now served via the Composer Autoloader and follow the PSR-4 standard. This requires the very latest releases of horde/horde (6.0.0alpha6) and horde/core (v3.0.0alpha9) to work correctly. The only exception is the database schema migration which intentionally does not follow regular autoloading conventions.

No more index.php

Client pages that traditionally called into the application’s internal classes have been removed and replaced by routes. This includes the index.php file. A default route handles the skeleton/ and skeleton/index.php cases. The contents of the example UI have not changed; they are only implemented differently.

Using horde/routes and horde/http_server

The new skeleton uses horde/routes to describe available routes in the app and horde/http_server to implement the controller classes behind these routes. The code comes with extensive documentation comments. horde/http_server implements the PSR-15: HTTP Server Request Handlers standard used by most modern PHP Frameworks.

Inter-App API

Skeleton now includes an example of the inter-app API implemented through the registry. The same interface is used as the basis for the JSON-RPC and XML-RPC APIs.
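
Calling such an inter-app method from another application goes through the registry; a sketch (the skeleton/list method name is made up):

// Anywhere inside a Horde request cycle:
$maxItems = 10;
$items = $GLOBALS['registry']->call('skeleton/list', [$maxItems]);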

Full Backward Compatibility

The changed libraries still work with unnamespaced or partially converted apps. Implementers can work according to their own schedule. However, there are some rules to keep in mind:

  • The namespaced versions of Application, Api, Test and Ajax\Application take precedence over their unnamespaced counterparts. Implementers can leave the unnamespaced code as-is or turn it into wrappers like
    lib/Application.php
    
    <?php
    /**
     * Backward compatibility wrapper.
     *
     * @deprecated Call into Horde\Skeleton\Application directly instead. 
     */ 
     class Skeleton_Application extends Horde\Skeleton\Application {}
    
    
    
  • Portal Blocks should NOT be wrapped or duplicated. They should exist as either namespaced or unnamespaced versions. You can have both types in the same application, but if you have a wrapped or copied block, it will show up twice.
  • Ajax application handler classes only have one integration point, /{lib, src}/Ajax/Application.php. You can upgrade them any time, just change the reference in the Application class. The Ajax Application class itself can be duplicated or wrapped, but the namespaced version will always be chosen. The wrapper would be just a transitional backwards compatibility measure so your application still works with earlier alpha versions of the framework.

Not in this release

The current version of skeleton still leaves room for improvement. Not all external libraries used in the code base are namespaced or otherwise modernized yet. The PageOutput helper still emits output which needs to be caught and redirected to the output stream as part of the PSR-7 HTTP response object; future versions should use a stream-ready implementation to reduce boilerplate. Also, there should be some ready-made controllers for standard cases like UI output or REST.

Horde/Log Rewrite goes PSR-3

I have rewritten Horde/Log based on the PSR-3 Logging standard published by PHP-FIG.

Why?

It had to be done at some point. The current wave of the Corona pandemic has cancelled some joyful other activities planned for this weekend, and I had been looking into PSR-3 loggers for quite some time. Most importantly, I wanted to do something else which needed a separate logging facility, and I was not ready to invest time into the various pitfalls of the old logger design. Just look at the logger factory – it is much too complex. Currently, there is no good balance between having too few logs in general and being flooded with mostly useless details of all the different aspects of Horde. Filtering is essential, but the data cannot easily be divided into the contexts that are relevant to different problems and tasks.

Goals

My main goal was simple: consuming code should be able to give the logger more context about the messages it sends. This can be helpful to sort out what is interesting and what is just distracting noise. For example, a logger-aware library or application may send markers along with the actual message. All messages go to the same logger, but different log handlers may be set up to only care about certain aspects. Want to write all CalDAV sync errors for a specific user to a separate file? Want to keep a separate log of failed login attempts? Want to forward your time tracking application’s “project closed” log events to an external JSON-consuming system? Even though the old logger had all the necessary parts, it made these tasks too difficult.
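
With PSR-3, that context travels as a plain array next to the message string. A hypothetical CalDAV sync call site might look like this:

use Psr\Log\LoggerInterface;

function syncCalendars(LoggerInterface $logger, string $user): void
{
    // ... sync work ...
    $logger->error(
        'CalDAV sync failed for {user}',
        [
            'user'   => $user,    // fills the {user} placeholder
            'module' => 'caldav', // a marker a handler can filter on
        ]
    );
}

A handler set up for the (hypothetical) 'caldav' marker could write these messages to a separate file while all other handlers ignore them.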

As a product, the new logger is not very interesting outside the Horde context. While it can be used for logging in PSR-3 aware libraries, it still has too many dependencies on other parts of the Horde ecosystem. To reduce this, I may factor out the Constraints log filter into a separate library. The opposite case is more interesting: if PSR-3 replaces tight coupling to a custom logger, projects may use their existing logger and have one less alien dependency to deal with. This might make some libraries more attractive, like Horde/Activesync or Horde/Imap.

From a code quality perspective, I also wanted to make the code more transparent to readers and tools. The old implementation relied on a __call magic method without really needing it, and multiple parts relied on tersely documented array structures. The new implementation passes PHPStan level 8. Coverage with parameter and return type hints is very high, with native property and return types following where possible. The current implementation is based on version 1.1.4 of the standard. When moving to a PHP 8 minimum requirement, this can be upgraded to the more strictly typed version 3.0 of the standard.

However, I am still missing unit tests against the new code. As it is substantially different from the H5 implementation, I could not easily adapt the existing test cases. This will require more work.

Also, integration of the new Logger into the core system is a separate task. Old and new logger infrastructure will have to coexist for some time. There is simply too much code that needs to be touched.

Architecture

The logger is architected as a modular system. Consuming code only has to deal with Horde\Log\Logger, which implements Psr\Log\LoggerInterface. The logger can support custom log levels not covered by RFC 5424. Log levels are implemented as objects with a string name and a criticality number value.

The PSR-3 standard mandates that log messages may be strings or any object that can be turned into a string. Internally, we convert them into LogMessage objects containing the string message, a reference to the LogLevel object and a hash of context attributes.

LogFilters are gatekeepers which look into a LogMessage and decide whether it may be logged. They can be used as global filters to suppress a log message altogether, or as local filters which only affect a certain log handler. The logger may include many different LogHandlers. These implement the actual processing of logs: writing them to a file, to a local syslog program or sending them over the network. Log messages may further be formatted for different needs: one handler may want to send XML documents to another server while another stores plaintext in a structured file. PSR-3 proposes a templating format where the logger fills placeholders in the message with data from the context array. In Horde/Log, this job is done by a series of LogFormatters. Depending on configuration, a LogHandler can have zero, one or many such LogFormatters. Not all combinations make sense.

PHP 8 readiness

The new code is ready to run on PHP 7.4 and PHP 8. However, the horde/constraint and horde/thrift dependencies have not yet been upgraded for PHP 8, limiting usefulness. This will be done as time permits, with many other topics having higher priority.

October Review: TOTP in Horde

I have been working on multiple things recently.

Kronolith Web UI: Appointment Cancellation Bug

I am fixing an annoying bug where internal user attendees receive cancellation mails when an appointment is updated by the owner. This only seems to happen from the web UI, not from CalDAV. I have already analysed how this happens. The fix is going to be a little bigger, as I do not want to invest in the legacy infrastructure (the so-called “Imples”) and use the opportunity to take a more modern approach. Work is in progress.

New Material UI based frontend for passwd

I have worked with the team on a Material Design based UI. It uses ReactJS, TypeScript and the new horde/http_server library, and it is very different from existing Horde UIs. Do not expect it to blend well with the existing Horde look & feel. The whole thing is a proof of concept and is as alien as the DIMP UI was back in Horde 3. This proof of concept still lives in a public feature branch; if you want to try it, you need to enable a new setting in the Preferences screen.

Two-Factor support in Horde Base and a TOTP library

More and more online services start using two-factor authentication for improved security. Along with a password, users have to enter some passcode they read from a keychain fob device or from an app on their phones (like Google Authenticator).

I have started a new library horde/otp which implements TOTP and other styles of passcodes used as a secondary authentication factor. The library needs some additional glue code in horde/core and horde/base which still has to be built. I would have liked to finish this in October but there is only so much time.

Improved horde-installer-plugin

The composer plugin for Horde has received some refactoring and enhancements. The current feature branch offers a custom command in the composer CLI. This command rebuilds the relevant configuration files when you move your Horde installation after running the install/update commands. There are also some minor changes to the way configurations are written. End users should not notice.

DNS library

B1 Systems has finally open-sourced a DNS library for the Horde ecosystem, which has been used internally for some years. The library can serve as the DNS building block of an IPAM system, but it also has an adapter to apply changes to the Amazon Route 53 service.

PHPStan support

Beginning this month, libraries and apps will gradually introduce the static analyzer PHPStan. The tool will run as part of the CI pipeline and detect various types of code imperfections which can potentially mean hard-to-detect bugs. The findings will be addressed as time permits.

Developer Introduction to Maintaina Horde

This introduction is targeted at developers with little or outdated prior Horde knowledge. Having worked with any backend development framework in any language will help. While a lot of these facts can be found in wikis or previous articles, others are fairly new or only relevant to the developments in the version of Horde delivered via https://horde-satis.maintaina.com – this article will be continuously updated as I have time or as relevant questions pop up.

Setup your work environment

A ready-made docker-compose deployment can be downloaded here: https://github.com/maintaina/deployments/
A basic root project for installing horde via composer can be found here: https://github.com/maintaina-com/horde-deployment/

Either way, you should be able to connect to a running sample installation on your local desktop in less than five minutes. Visual Studio Code and other editors can directly connect into the container, so there is no need for fancy mounts. Mind each repo’s README files for instructions.

Horde as viewed by the composer installer

Composer is the installer used for Horde and also does most of the autoloading.

A root project should have at the very least the paths /web/ for all web-visible assets, /vendor/ for dependencies and /var/ for configuration, logs and variable data which should NOT be web-visible.

Besides the default library type, Horde comes with a composer plugin to support the types horde-application, horde-theme and horde-library. The /web/ and /vendor/ dirs are under the control of composer; do NOT put any custom content there, as it will be deleted with each subsequent update. This is different from the older PEAR installer, which would only overwrite files with newer package content but never remove custom files not included in the package.


  • All horde-application packages are installed to the /web/ dir.
  • All library, horde-library and horde-theme packages are installed to the /vendor/ dir.
  • If a package is a horde-library or a horde-application and has a js/ dir, its contents are symlinked into a structure below /web/js/.
  • If a package is a horde-application or a horde-theme, specific links under the /web/themes/ dir are created on install or update.
  • The installer plugin will search for application config files under the /var/config path and link them to /web/$app/config/. It will also autogenerate some files under /var/config if they are missing.
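
For illustration, the composer.json of a hypothetical application package (package name made up; the require entries follow the "^7.4 || ^8" convention mentioned above) declares one of these types so the installer plugin knows where to put it:

{
    "name": "maintaina-com/example-app",
    "type": "horde-application",
    "require": {
        "php": "^7.4 || ^8",
        "horde/horde-installer-plugin": "^2 || dev-FRAMEWORK_6_0"
    }
}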

Filesystem layout

All Horde applications share the same filesystem layout.

  • /app/controllers/ contains old-style Horde_Controller request handlers.
  • /bin contains commandline scripts or cron jobs, usually prefixed with the application name and without the .php suffix – these will automatically be linked into the /vendor/bin path unless otherwise declared.
  • /config contains actual defaults files from the package as well as symlinks to user-provided or autogenerated configuration items. The routes.php file goes here.
  • /doc contains the license file, the changelog.yml file, optionally a CHANGES file and any documentation in RST format. Some documentation can be autogenerated from the horde.org wiki.
  • /js contains ready-to-run JavaScript code. Minifying is not necessary as it can be done by Horde.
  • /lib contains PHP code which is usually unnamespaced and follows the PSR-0 autoloading standard.
  • /locale contains compiled machine-readable translation catalogs and the human-readable PO sources from which they are generated.
  • /migration contains code for automated buildup, upgrade and teardown of the SQL database schema.
  • /scripts contains upgrade scripts and non-PHP content like LDIF files or Apache config snippets.
  • /src contains PHP code following the PSR-4 namespaced autoloading and PSR-12 coding standards.
  • /templates contains PHP and HTML templates, mostly for backend-rendered content.
  • /test contains unit tests and integration tests built with the PHPUnit framework.
  • /themes contains CSS and image files. See the separate section on themes.
  • The top dir contains a composer.json manifest, a PEAR package.xml manifest, possibly a README.md and other control files. Traditionally, it housed all the entry points through which the browser would call into PHP code and get server-rendered pages as a result. This is no longer the recommended pattern.

Most of your newly developed code should live somewhere under /src. Libraries follow the same layout.

Web Layout

The general rule is: you cannot hardcode any web-visible paths. All assumptions will eventually be wrong.
In a default installation as generated by the installer, the Horde base app will be under /horde/ and the login screen under /horde/login.php etc. Other applications will be on the same level beside the /horde base app – /turba for the addressbook, /passwd for the password utility etc. JavaScript will be visible under /js and themes under /themes – that is, unless you have configured something else. The webroot could be /tools/foo/bar rather than /.
The webmail application could reside under https://imp.foo.org/ rather than https://foo.org/imp. Themes could be served from a totally different subdomain. The Horde backend knows how to autogenerate the appropriate URLs. Your frontend should not make any assumptions.

Translation

Use the horde-translation tool to maintain translations. It compiles binary machine-readable catalogs from the human-readable .po format; PHP uses these files to translate English text to your chosen frontend language. You can expose your frontend language under language-independent keys filled with the appropriate translations. You should NOT maintain translations in your frontend code, as translation files can be customized by the administrator.

Themes

Horde can apply themes at runtime. Theme CSS is applied in the following order:

  • Horde global default CSS
  • Horde global custom theme CSS
  • App-specific default CSS
  • App-specific custom theme CSS (if present)

This mechanism will break if you use a build process that prepends CSS definitions with build-specific prefixes.
User-provided themes have the composer type horde-theme. The installer will link them to the web/themes folder.

Caching and minifying Javascript

Horde has a runtime JavaScript bundler and minifier which will bundle all JavaScript marked for delivery into one file and minify it. This file will be cached in the web-visible static dir; on version updates, the file name will change. For this to work, you need to tell the framework which JavaScript files to include rather than just dishing out a static HTML file with hardcoded paths to the individual JS files.

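A sketch of registering scripts at page build time, assuming the H5-era Horde_PageOutput helper (treat the exact method names as an assumption for the modernized stack):

// Fetch the page output helper from the injector and register scripts
// instead of writing <script> tags into templates.
$pageOutput = $GLOBALS['injector']->getInstance('Horde_PageOutput');
$pageOutput->addScriptFile('myfeature.js', 'skeleton'); // app-local js/myfeature.js
$pageOutput->addScriptFile('hordecore.js', 'horde');    // a file from another app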

Caching and minifying themes

Horde has a runtime CSS minifier and bundler. All CSS will be bundled into one file and minified. This file will be cached in the web-visible static dir. If the user selects another theme, Horde will minify and cache a different collection of files. For this to work, you should not hardcode paths to CSS files in your HTML templates or frontend code.

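Analogously for stylesheets, assuming the same helper (again, the method name is an assumption, not a confirmed API):

// Register an extra stylesheet instead of hardcoding a <link> tag;
// the bundler picks it up together with the theme CSS cascade.
$pageOutput = $GLOBALS['injector']->getInstance('Horde_PageOutput');
$pageOutput->addThemeStylesheet('myfeature.css');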

The Registry

With this amount of configurability, you need a source of truth about what goes where. This is the Horde Registry.
The registry has an index of all applications: their base web and filesystem locations, which RPC and inter-app APIs they expose, how they are represented in the topbar menu, and where their JavaScript and themes resources are available in the local filesystem and as viewed from the web.

Registry Configuration

Querying the Registry

Routes, Middlewares and Handlers

Each application may have a config/routes.php file which contains web-accessible routes relative to this application.

use Horde\Core\Middleware\AuthHordeSession;
use Horde\Core\Middleware\RedirectToLogin;
use Horde\Passwd\Middleware\RenderReactApp;
use Horde\Core\Middleware\ReturnSessionToken;
use Horde\Core\Middleware\DemandAuthenticatedUser;
use Horde\Core\Middleware\DemandSessionToken;

use Horde\Passwd\Handler\ReactInit;
use Horde\Passwd\Handler\Api\ChangePassword;

$mapper->connect(
    'Api',
    '/api/changepw',
    [
        'controller' => ChangePassword::class,
        'stack' => [
            AuthHordeSession::class,
            DemandAuthenticatedUser::class,
            // DemandSessionToken::class,
        ],
    ]
);

$mapper->connect(
    'ReactInit',
    '/react',
    [
        'controller' => ReactInit::class,
        'stack' => [
            AuthHordeSession::class,
            RedirectToLogin::class,
        ]
    ]
);

This example from a development version of the passwd app shows two routes: /react hands out a browser page with all the boilerplate to load a single page application UI (SPA). /api/changepw receives a JSON message and, if the request is valid, changes the user’s password. Each route definition consists of a name, a URL pattern and a third parameter defining the controller, the middleware stack or constraints like only processing the route for POST requests. A route can also contain placeholders that expand into variables, and defaults for optional parts. Routes are processed in a first-hit-wins strategy, so place your specific route definitions before the general cases. All routes are interpreted as relative to the application’s webroot.

The controller can be any string representing a class which is either a traditional Horde_Controller, a PSR-15 request handler or a PSR-15 middleware. The only requirement is that the injector needs to know how to produce it, either via autowiring or via some explicit binder (factory, implementation, annotation …).

The stack is either an array of strings representing PSR-15 middlewares or a string recognized by the injector which will produce an iterable list of middlewares. If there is no stack parameter, a default stack will be applied: only allow requests from authenticated users identified by a session cookie, and forward everybody else to the login page. If you do not want any middlewares before your controller, explicitly set the stack to an empty array. The middleware stack is executed top to bottom before the controller is called, and the request can be modified in that phase. If any middleware decides to answer the request by itself, no further middlewares will be called and the controller will not be executed. Responses travel back up through the middleware stack and can be modified during that phase.
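
A minimal PSR-15 middleware for such a stack could look like this (a generic sketch against the interop interfaces, not one of the shipped Horde\Core middlewares):

use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\ServerRequestInterface;
use Psr\Http\Server\MiddlewareInterface;
use Psr\Http\Server\RequestHandlerInterface;

class AddRequestId implements MiddlewareInterface
{
    public function process(
        ServerRequestInterface $request,
        RequestHandlerInterface $handler
    ): ResponseInterface {
        // Modify the request on its way down the stack ...
        $request = $request->withAttribute('requestId', bin2hex(random_bytes(8)));

        // ... delegate to the remaining middlewares and the controller ...
        $response = $handler->handle($request);

        // ... and modify the response on its way back up.
        return $response->withHeader('X-Request-Id', $request->getAttribute('requestId'));
    }
}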

Lifecycle of a browser session

A typical user session would go like this:

  • A user navigates to the webroot or any application route.
  • As his browser provides no valid session cookie, he is redirected to the login page.
  • The user enters credentials into the login form and posts the request. This creates a valid session cookie.
  • The user is forwarded to his desired route or a default route.
  • The backend sends all the HTML, CSS, graphics and JavaScript required to show the requested page. JavaScript and CSS are each minified and bundled into cache files. The framework also inserts a JavaScript variable with dynamic data like the location of the API endpoints, the chosen UI language etc.
  • User interactions either trigger ajax requests to API routes or forward him to a location outside the single page application (e.g. another Horde app or someplace outside). For ajax requests which change backend content, another credential besides the cookie is needed, for example a custom header with the session key or a special write token.
  • Eventually, the user logs out and his session is invalidated.

Permissions system

Administrator users

Shares

Preferences

Configuration System

Virtual Host specific configuration

Inter-App API and RPC

Special classes

ORM Layer horde/rdo

Custom iCalendar data in Kronolith & Nag

The iCalendar exchange format is everywhere in Horde’s calendar (kronolith) and tasks (nag) apps. It is offered for manual import/export. It is the centerpiece of the CalDAV synchronisation protocol and various APIs of these apps. The format also plays a role in email invitations and updates sent via iTip. It is a very powerful and well-accepted format. This is, unfortunately, a little bit of a problem. Fortunately, we also have a solution.

The iCalendar format was originally defined by the IETF. They released RFC 2445 back in 1998. While Horde still understands that format, a newer version 2.0 was defined in RFC 5545 in 2009. An extension to the 2.0 standard was released in 2016: RFC 7986 defines additional attributes needed to describe related conferencing software resources or other technical aspects.

Server and client software has always been free to add custom attributes to the different components of the standard. For example, Horde Kronolith stores and reads a special attribute X-HORDE-ATTENDEE to mark event attendees who have a Horde user attached. Mozilla uses attributes like X-MOZ-LASTACK, and MS Exchange comes with a long list of custom attributes, among them X-CALEND. Understanding these attributes when processing an iCalendar file is optional. However, servers are expected not to silently drop attributes they do not understand.

Up until now, both Kronolith and Nag supported a wide range of attributes, but far from all. Events are reconstructed from calendar backend contents; unsupported data simply got lost. As it is technically impossible to know all attributes anybody out there may use, the opposite definition was needed: an attribute is an “other” attribute if it is not in the list of attributes the app already handles. Both apps got a catalog of attributes they understand and a property for storing everything else. Support for actually storing and retrieving this data is currently limited to the SQL backend. I do not currently invest time in Kolab support, and maybe I never will.

Also, while I took care to ensure that normal editing via the UI will not break the “other” data, I did not cross-check scenarios where ActiveSync is involved. Please report bugs as you find them.

CalDAV event Storage

For Kronolith, optional storage of unmodified CalDAV events as received has been added during development. I am not sure if I will keep it or remove it again. It is useful for debugging various support scenarios, but under normal operation it is not required. It may be useful for building a cache of external CalDAV calendars, though.

Extend to Turba Addressbook

Turba Addressbook uses a similar data format, vCard, and implements the sister protocol CardDAV. Things are indeed a little more complicated here. There are three relevant versions of vCard (2.1, 3.0 and 4.0), and multiple extension RFCs with additional contact attributes exist. Worse, there is a host of X- attributes from popular desktop addressbook vendors: Evolution, Kontact, Thunderbird, Android, MS Exchange/Outlook… There are at least two different ways to represent contact groups, and the whole thing is a little messy. But the real problem lies in Turba’s highly configurable nature. At least two backends need to be upgraded, as both SQL storage and LDAP play a key role in typical Turba use cases. The exact layout of each addressbook can differ even inside the same installation. The list of vCard attributes considered natively supported needs to be calculated for each addressbook, and additional attributes may need to be stored outside of the specific backend. This will take some time and consideration.

Still, Turba’s tendency to forget what it does not care for needs to be addressed. Otherwise users risk losing attributes on sync.

Maintaina Horde switches to openSUSE Leap

Our Horde docker images have switched over from Tumbleweed to openSUSE Leap once again.

Recently, our container build CI job on github.com broke down unexpectedly. An investigation showed that Tumbleweed’s core libraries, especially libc, were too new for the CI’s build system, which is based on Ubuntu LTS.

This is the second time we have abandoned the Tumbleweed basis for Horde docker containers. openSUSE Leap 15.3 uses a relatively old, but well-maintained, set of base libraries. Both Leap and Tumbleweed deliver PHP 7.4 as a basis for Horde. In both systems, we skip the packaged composer version in favour of a static pick which we update from time to time. We may switch over to the packaged composer once we feel confident.

For users and administrators of the image, both Tumbleweed and Leap 15.3 should feel more or less the same. For end users of the delivered Horde setup, there should not be any downsides. We will switch back to the Tumbleweed image in a while, once we have picked a more recent version of Ubuntu for the CI.

CardDAV vs macOS Contacts

In case you run Horde, Nextcloud or other CalDAV/CardDAV server products, sooner or later you will encounter users who want to use the macOS addressbook application, “Contacts”, to access their server contacts. As of macOS 11.5.2, the Apple Contacts app only supports one addressbook per principal: it will pick the first addressbook and use it for reading and writing. Unfortunately, the first addressbook exposed by Horde’s CardDAV system often is the favourite recipients addressbook, which is read-only.

There is no real solution to this yet. You can try to trick your CardDAV server into listing the “right” addressbook first, but that is about it.

If anybody has an idea how to tackle this limitation without breaking carddav for all the other clients, please let me know.

Can Horde’s internal API use PSR-7 Messages?

The Horde Inter-App system has been around since Horde 3.

Horde inter-app messages are addressed by a two-part string. The first part, followed by a slash character, is called the API; the second part after the slash is called the method. The registry can delegate a complete API to an application, or a list of complete API/method strings. In the latter format, a certain API/method combination can be assigned to one app even if the API in general is assigned to another app. The API is implemented by a class $application_Api in the file Api.php inside each Horde application. That class’s methods and their signatures are the methods exposed by the inter-app API. There are some meta arrays controlling further details, but let’s ignore them. All but a few APIs only take arrays and primitives (string, number, bool) as parameters and issue them as return types. This is because the RPC layer eventually receives and emits HTTP messages, which are just text. Only those inter-app API methods which are meant to be strictly internal will consume and emit PHP objects.
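
As a sketch, such an Api class looks roughly like this (app and method names made up; Horde_Registry_Api is the real base class):

/**
 * Hypothetical skeleton/lib/Api.php, exposing methods callable as
 * $registry->call('skeleton/getItems', [$filter]).
 */
class Skeleton_Api extends Horde_Registry_Api
{
    /**
     * @param array $filter  Only arrays and primitives survive the RPC layer.
     *
     * @return array  Serialisation-friendly nested arrays.
     */
    public function getItems(array $filter = []): array
    {
        // A real implementation would query the app's backend here.
        return [];
    }
}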

PSR-7 messages and PSR-15 handlers/middlewares are an interop standard. They do not make a lot of assumptions about the underlying implementation and have been used to implement REST solutions as well as old-style server-driven dynamic websites. A request object contains a URI to a resource and will eventually result in a response object. In between, there is usually a broker piece called a router which analyses the URI and other request parameters to assign the request to the proper implementation code or chain of code pieces, called middlewares. Anywhere in that chain, an answer is created and sent back to the caller. The request and response bodies are streams of text or binary data, as the request and response are essentially text messages.

At first glance, this is an easy match. The Registry mediates between the message sent by the caller and the code which handles it; we could call it the router. The API class with its methods could be seen as a set of handlers. The PSR-7 ServerRequest object represents an HTTP request, but it also allows arbitrary attributes to be attached to the actual request data. These attributes may be any PHP value, including objects.

There are some details to keep in mind though.

The inter-app API has little definition of a data contract. It predates PHP method parameter types and return types. In traditional code, users could feed just about anything into the inter-app API, and the implementation would need to guard against any value, expected or unexpected. Inter-app just assumes the caller is eligible to call: authentication is delegated to the existing PHP session or to the RPC setup, and authorization control must happen in the called code. That may lead to bloated, repetitive code in the implementation.

As each app has only one API class file, it is not currently possible to implement the same method for two different APIs if the same app handles both. If you have two different APIs clients/get and contracts/get and both are implemented in the same app, they will end up in the same code path. The way around this is calls like clients/getClients and contracts/getContracts, but that is just ugly.

The rampage/routes/http_server stack can easily discern a GET /clients/ call from a GET /contracts call, but it only works inside a specific app. By setting up a separate API set of routes, we can easily have calls abstracted from the implementing app. The system of handlers and middlewares allows delegating authentication and authorization checks outside of the actual implementation of an endpoint, which reduces repetitive boilerplate. One big issue remains: as of now, inter-app can return native PHP objects to the caller. PSR-7 messages allow attributes on the ServerRequestInterface, but there is no equivalent in the ResponseInterface. Inter-app can carry objects (for internal calls) or serialisation-friendly nested arrays until it hits the RPC layer, which turns them into a text structure, say XML or JSON. How would we do that in ResponseInterface implementations? How would that keep the implementation reusable for a REST interface, app-internal AJAX or other code?
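
The asymmetry is easy to show in code. A request handler can read rich PHP values from request attributes, but everything it returns has to fit through headers and a text body (a generic PSR-7/PSR-17 sketch, not a shipped Horde class):

use Psr\Http\Message\ResponseFactoryInterface;
use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\ServerRequestInterface;
use Psr\Http\Server\RequestHandlerInterface;

class GetClientsHandler implements RequestHandlerInterface
{
    private ResponseFactoryInterface $responseFactory;

    public function __construct(ResponseFactoryInterface $responseFactory)
    {
        $this->responseFactory = $responseFactory;
    }

    public function handle(ServerRequestInterface $request): ResponseInterface
    {
        // On the way in, attributes may carry arbitrary PHP values,
        // e.g. an identity object set by an authentication middleware.
        $user = $request->getAttribute('authenticatedUser');

        $clients = ['c1' => 'ACME', 'c2' => 'Foo Inc.']; // native PHP data

        // On the way out there is no attribute bag: objects must be
        // flattened into a serialisation-friendly text body.
        $response = $this->responseFactory->createResponse(200)
            ->withHeader('Content-Type', 'application/json');
        $response->getBody()->write(json_encode($clients));
        return $response;
    }
}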

A vision of convergence

Bringing together new capabilities and existing system participants is tricky. A new RPC and inter-app system should integrate with the old interface, not just stand beside it. Having two different inter-app layers would be confusing; abandoning the old one right now would be an unnecessary strain on developers’ time budgets.

As an inter-app user, I want to use $registry->call('method', [$param1, $param2]) or $registry->api->method($param1, $param2, …) as I did before.

As an RPC user, I want to call \Horde_Rpc::request('api/method', $params, $options) as I did before. I do not care what happens in the background.

As an application developer exposing an API, I do not want to give up Api.php right now; it has to work with the new stack as well or as badly as it did before.

As a developer of new apps and features, I want to leverage the extended capabilities. I want to be able to implement two distinct APIs using the same method names. I want to be able to return native objects, even serialisation-unfriendly ones with PHP resources, along with a serialisation-friendly message to use in RPC, REST or other HTTP use cases. I do not want to be restricted to two levels of API/method. I want to re-use middleware I have already built for the frontend AJAX.

As a distributed app developer, I want to define API resources and have them served either internally or by external microservices, transparently over HTTP requests.

For the future, I would like some degree of introspection and possibly some guidance on allowed or required request parameters.

As an integrator, I want to securely communicate with only partially set up horde instances to finish or upgrade setups by firing HTTP requests.

Implementation approach

The horde-deployment project includes a route from webroot/api/ to a global API router managed by the Horde base app. This API router is first populated by the Registry and then supplemented by a config/routes.api.php file from each registry app.

Regardless of the calling context, a cascade of middlewares sets attributes for the called API/route, the parameters, the resolved implementation and the outcome of any authentication checks that have already happened. The implementation is either an adapter middleware calling an app’s Api class or an actual implementation middleware/stack. It writes the return values and other state into an attribute digested by the bottom of the stack. In the inter-app case, a token response is returned and the actual data structures are taken from the handler and returned to the caller. Real RPC backends generate appropriate headers and a stream body for the response. On its way back to the top of the stack, the response can be processed further, for example gzip-compressed, logged, or used to trigger metrics updates.