CardDAV vs macOS Contacts

If you run Horde, Nextcloud or other CalDAV/CardDAV server products, sooner or later you will encounter users who want to use the macOS addressbook application, “Contacts”, to access their server contacts. As of macOS 11.5.2, the Apple Contacts app only supports one addressbook per principal. It picks the first addressbook and uses it for reading and writing. Unfortunately, the first addressbook exposed by Horde’s CardDAV system is often the favourite recipients addressbook, which is read-only.

There is no real solution to this yet. You can try to trick your CardDAV server into listing the “right” addressbook first, but that is about it.

If anybody has an idea how to tackle this limitation without breaking CardDAV for all the other clients, please let me know.

Can Horde’s internal API use PSR-7 Messages?

The Horde Inter-App system has been around since Horde 3.

Horde Inter-App messages are addressed by a two-part string. The first part, followed by a slash character, is called the API. The second part after the slash is called the method. The Registry can delegate a complete API to an application, or it can delegate a list of individual API/method strings; in the latter format, a certain API/method combination can be assigned to one app even if the API in general is assigned to another app. The API is implemented by a class $application_Api in the file Api.php inside each horde application. That class’s methods and their signatures are the methods exposed by the Inter-App API. There are some meta arrays controlling further details, but let’s ignore them. All but a few API methods only take arrays and primitives (string, number, bool) as parameters and return the same kinds of values. This is because the RPC layer eventually receives and emits HTTP messages, which are just text. Only those Inter-App API methods which are meant to be strictly internal will consume and emit PHP objects.
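
Schematically, an app’s Api.php looks roughly like this sketch (the app name “Myapp” and the method are invented for illustration):

class Myapp_Api extends Horde_Registry_Api
{
    /**
     * Exposed via Inter-App as "myapp/listContacts" (or whatever
     * API/method combination the Registry assigns to this app).
     *
     * @param string $filter  A search filter.
     *
     * @return array  Serialisation-friendly nested arrays only.
     */
    public function listContacts($filter = '')
    {
        // Only arrays and primitives cross the RPC boundary.
        return array(array('name' => 'Jane Doe', 'mail' => 'jane@example.com'));
    }
}

A caller would reach this code via $registry->call('myapp/listContacts', array('doe')).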

PSR-7 messages and PSR-15 handlers/middlewares are an interop standard. They do not make a lot of assumptions about the underlying implementation. They have been used to implement REST solutions as well as old-style server-driven dynamic websites. The request objects contain a URI to a resource and will eventually result in a response object. In between there is usually a broker piece called a router, which analyses the URI and other request parameters to assign the request to the proper implementation code or chain of code pieces, called middlewares. Anywhere in that chain, an answer is created and sent back to the caller. The request and response bodies are streams of text or binary data, as the request and response are essentially text messages.

At first glance, this is an easy match. The Registry mediates between the message sent by the caller and the code which handles it. We could call it the router. The API class with its methods could be seen as a set of handlers. The PSR-7 ServerRequest object represents an HTTP request, but it also allows attaching arbitrary attributes to the actual request data. These attributes may be any PHP value, including objects.

There are some details to keep in mind though.

The Inter-App API has little in the way of a data contract. It predates PHP method parameter types and return types. In traditional code, users could feed just about anything into the Inter-App API, and the implementation would need to guard against any value, expected or unexpected. The Inter-App API just assumes the caller is eligible to call. Authentication is delegated to the existing PHP session or to the RPC setup; authorization control must happen in the called code. That may lead to bloated, repetitive code in the implementation.

As each app has only one API class file, it is not currently possible to implement two same-named methods of different APIs in the same app. If you have two different APIs clients/get and contracts/get and both are implemented in the same app, they will end up in the same code path. The way around it is calls like clients/getClients and contracts/getContracts, but this is just ugly.
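
To illustrate the clash (the app and API names are invented):

// Both APIs are assigned to the same app "crm", so both calls resolve
// to the single class method Crm_Api::get() and cannot diverge.
$registry->call('clients/get', array($id));
$registry->call('contracts/get', array($id));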

The rampage/routes/http_server stack can easily discern a GET /clients/ call and a GET /contracts call, but it only works inside a specific app. By setting up a separate API set of routes, we can easily have calls abstracted from the implementing app. The system of handlers and middlewares allows delegating authentication and authorization checks outside of the actual implementation of an endpoint. This reduces repetitive boilerplate. One big issue remains. As of now, Inter-App can return native PHP objects to the caller. PSR-7 messages allow attributes on the ServerRequestInterface, but there is no equivalent in the ResponseInterface. Inter-App can carry objects (for internal calls) or serialisation-friendly nested arrays until it hits the RPC layer. This layer will turn it into a text structure, say XML or JSON. How would we do that in ResponseInterface implementations? How would that keep the implementation reusable for a REST interface, app-internal AJAX or other code?

A vision of convergence

Bringing together new capabilities and existing system participants is tricky. A new RPC and Inter-App system should integrate with the old interface; it should not just stand beside it. Having two different Inter-App layers would be confusing, and abandoning the old one right now would be unnecessary stress on developers’ time budgets.

As an inter-app user, I want to use $registry->call('method', [$param1, $param2]) or $registry->api->method($param1, $param2…) as I did before.

As an RPC user, I want to call \Horde_Rpc::request('api/method', $params, $options) as I did before. I do not care what happens in the background.

As an application developer exposing an API, I do not want to give up Api.php right now; it has to work with the new stack as well or as badly as it did before.

As a developer of new apps and features, I want to leverage extended capabilities. I want to be able to implement two distinct APIs using the same method names. I want to be able to return native objects, even serialisation-unfriendly ones with PHP resources, along with a serialisation-friendly message to use in RPC, REST or other HTTP use cases. I do not want to be restricted to two levels of API/method. I want to re-use middleware I already built for the frontend AJAX.

As a distributed app developer, I want to define API resources and have them served either internally or by external microservices, transparently over HTTP requests.

For the future, I would like some degree of introspection and possibly some guidance on allowed or required request parameters.

As an integrator, I want to securely communicate with only partially set up horde instances to finish or upgrade setups by firing HTTP requests.

Implementation approach

The horde-deployment project includes a route from webroot/api/ to a global API router managed by the horde base app. This API router is first populated by the Registry and then supplemented by a config/routes.api.php file in each registry app.

Regardless of the calling context, a cascade of middlewares sets attributes for the called API/route, the parameters, the resolved implementation and the outcome of any authentication checks that have already happened. The implementation is either an adapter middleware calling an app’s Api class or an actual implementation middleware/stack. It will write the return values and other state into an attribute digested by the bottom of the stack. In the Inter-App case, a token response is returned and the actual data structures are taken from the handler and returned to the caller. Real RPC backends generate appropriate headers and a stream body for the response. The response can be processed further as it travels back to the top of the stack, for example gzip-compressed, logged, or used to trigger metrics updates.
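
A rough sketch of one such attribute-setting middleware, using only the PSR-7/PSR-15 interop interfaces (the attribute names are invented placeholders):

use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\ServerRequestInterface;
use Psr\Http\Server\MiddlewareInterface;
use Psr\Http\Server\RequestHandlerInterface;

class ResolveApiMethod implements MiddlewareInterface
{
    public function process(ServerRequestInterface $request, RequestHandlerInterface $handler): ResponseInterface
    {
        // Derive api/method from the URI path and attach them as
        // attributes for middlewares and handlers further down the stack.
        $path = trim($request->getUri()->getPath(), '/');
        [$api, $method] = explode('/', $path, 2) + ['', ''];
        return $handler->handle(
            $request->withAttribute('horde.api', $api)
                    ->withAttribute('horde.method', $method)
        );
    }
}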

Authentication & Authorization is complex

Could there be any more straightforward topic than authentication & authorization? The user provides a username and password and clicks “login”; the backend checks if the credentials are valid. Invalid credentials are not authorized, valid credentials are authorized and identified (authenticated). End of story. Right? Well… in many cases, it’s not that trivial.

As a user, I want to be informed if I have to change my password soon.

As a security officer, I want accounts blocked for some time after a certain amount of failed login attempts. I also want passwords to expire after a certain time. I also want login sessions to expire if client IP address or browser identity changes.

As an integrator, I want to enhance the system to digest certificates, Shibboleth, SAML or OpenID Connect, Bearer Tokens, JWTs or even Kerberos Tickets.

As a support person, I want to silently normalize user login names, lowercase them or append domain names to login names.

As a usability consultant, I want to leverage the user database for UI, make user names searchable and browseable.

As a site administrator, I want to be able to filter out certain or most users from the backend even if they provide valid credentials or block login for all but a few users, i.e. for site maintenance.

As an innovator I want to join user bases from two different authentication sources and possibly migrate them on next login seamlessly, without them noticing.

As a sales person, I want to allow a limited guest user experience prior to login rather than force everybody to the login screen.

As a returning user, I want to transparently log in through a remember me cookie, but maybe make the application aware of that limited trust, asking for real login for sensitive operations.

As a developer, I want to be flexible and allow any combination of criteria. Users may log in with a global password or a purpose-limited token; I want them to use a second factor like TOTP if they have set one up, but pass if they haven’t, unless I don’t allow it.

For business reasons, I want to rate limit API access per hour, per day and per month with individual thresholds each.

As an auditor, I want each request’s authorization process to be logged for evidence.

There is a lot more to consider, but I will stop here.

Authentication is any means of making sure of a requester’s identity. The most common practice in computing is asking for a username (identity) and a password (proof). Another common practice is asking some external authority we trust, an Identity Provider. When we look at a person’s passport to compare the photo or fingerprint with their actual face or finger, the same aspects are involved: the name written on the passport for identity, the photo or fingerprint for proof. And, implicitly, we trust the party who created the passport (the Identity Provider) and maybe have some means to check the integrity of the document. But if somebody has no passport but a driver’s license or a club membership card, we might use that for verification instead.

Transparent authentication is a special case where we can identify the user without explicitly interacting with them. This can be achieved whenever the request carries credible identifying information, like a pre-established cookie, a certificate, a passed token whose integrity can be validated, or other means.

Authorization is any decision about whether a requester has access to a resource. Simply being authenticated might not be enough. Guests without authentication may be eligible for certain areas of your application, but maybe not if they come from a certain IP range or country. A person may be too young or too old to use a certain facility. A person may be old enough to buy alcohol but currently unable to present a sufficient document to prove it. On the other hand, the bearer of a ticket may be eligible to visit some concert, with no interest in his actual identity. A software user may need to both be authenticated and be part of a certain privilege group “administrators” to access a configuration screen.

Both requirements can be linked to each other, as well as all those aspects mentioned above.
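
To make the distinction concrete, here is a minimal sketch of how the two concerns separate in code; all class and method names are invented for illustration:

// Authentication: establish WHO is asking, by whatever proof is at hand.
$identity = $authenticator->authenticate($request); // session, cookie, token, ...
if ($identity === null) {
    $identity = Identity::guest(); // maybe acceptable, maybe not
}

// Authorization: decide whether THIS identity may access THIS resource.
if (!$accessControl->isAllowed($identity, 'admin:configuration')) {
    return $responseFactory->createResponse(403);
}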

In another article I will look at how Horde does it and discuss if this approach is still right for modern use cases.

CardDAV: What is Turba’s true data model?

Turba Addressbook imports, exports and syncs to and from many formats. I discussed all the formats in a previous article.

You can even customize the addressbooks, make them contain extra fields or omit common fields. Saving and loading data from the addressbook backend is, in a way, one more such conversion. Turba can read and write to SQL and LDAP backends, to the Preferences system, and to an outdated version of the Kolab Groupware Server as well as IMSP servers. It can derive addressbooks from Horde Groups, favourite recipients (usually from IMP) and stored search queries (virtual addressbooks).

This diversity of backends and configuration options comes at a cost. It is sometimes not clear what the data model is and who owns the data. This complicates synchronisation scenarios like CardDAV.

Let’s look at this in detail. LDAP backends might be considered the owner of the data, and Turba would be just a view of that data or a client with edit permissions. LDAP would be able to change data, add new contacts, delete contacts or change contact details without Turba even knowing. This is less of a problem with the SQL driver, as the SQL schema of Turba should never be consumed directly, but rather through Turba’s API or exchange formats. Groups are a primary culprit of not telling Turba about changes.

On the other hand, sync protocols may contain data the current Turba configuration or backend does not recognize. CardDAV and its data format vCard may contain a wealth of properties and allow custom extension fields. CardDAV servers are supposed not to simply drop or forget fields they do not recognize, because that could make the client forget fields in the data it actually supports. On the other hand, a client may send an update of a contact missing a property which is stored in a previous version of the contact. We need to understand if the client simply cannot handle that property or if it intends to remove it. Well-behaved clients should not strip properties they do not understand, and well-behaved servers should not either.
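
One conceivable server-side strategy is to store the unmapped raw properties next to the mapped fields and merge them back on export. A sketch with invented method names:

// Import: split the incoming vCard into fields the backend understands
// and an opaque remainder that must survive the round trip.
$known = $driver->extractSupportedProperties($vcard);
$rest  = $vcard->withoutProperties(array_keys($known));
$storage->save($uid, $known, $rest->export());

// Export: merge the stored remainder back into the generated vCard so
// the server never "forgets" properties it did not recognise.
$vcard = $driver->toVcard($storage->load($uid));
$vcard = $vcard->mergeProperties($storage->loadRemainder($uid));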

Turba may support contact information which cannot be mapped to CardDAV/vCard but I am currently unaware of any. As vCard allows custom fields, at the moment I consider vCard the lingua franca.

Another tough scenario is references to other contacts or resources. CardDAV/vCard supports multiple ways to indicate these, and the backend may support a completely different model of referencing. How should we react to references to contacts which we do not (yet) have? Many of these problems also affect CalDAV sync in the Kronolith calendar and Nag tasks apps.

I am ignoring ActiveSync and SyncML because I no longer possess any SyncML-capable device and I have limited understanding of the ActiveSync code. It seems to have a static mapping between Turba-internal field names and ActiveSync field names.

To resolve all these issues, Turba needs to have its own data model and it needs to store synced data its backend is not interested in. It must not lose or forget any information when syncing back to the client. This is currently not the case.

Horde’s HTTP component goes PSR

This weekend, I gave the horde/http component a major redesign. See how things escalated. Oh my.
My minimum goals were namespacing, PSR-4 (Revised Autoloading Standard) and some minor, schematic adjustments. The final result is quite different. I ended up implementing PSR-7 (HTTP Message Interface), PSR-17 (HTTP Factories) and PSR-18 (HTTP Client). The code largely complies with PSR-12 (Extended Coding Style Guide) and thus, implicitly, PSR-1 (Basic Coding Standard). I am sure you will find some deviations and issues, so I welcome any Pull Requests against my repo. You will find the new code in /src/. The original, incompatible implementation is untouched and resides in /lib/. They can coexist as they are so different (namespaced vs unnamespaced, among others).

This is not a total rewrite. I could leverage most of the existing code base with some tweaks. This work would not exist without the foundation by Chuck Hagenbuch and all the contributions from the different Horde maintainers over the years. You will also notice similarities to other PHP PSR-7/18 implementations out there. I checked out Guzzle, httplug/php-http and some others. It was a great learning experience and I will not pretend I am not influenced by them.

As with all my modernisation activities, I made use of features allowed by PHP 7.4. This excludes Constructor Property Promotion and, sadly, Union Types, as both are PHP 8. Union Types have been relegated to phpdoc annotations or check methods. Please mind that most of the PSRs target compatibility with PHP releases older than PHP 7.4 and thus do not sport return types or scalar type hints. I followed these signatures where applicable.

One major change between the old codebase and the new one concerns the clients and the Request/Response classes. In the old implementation, there would be one client but different Request/Response implementations using different backing technologies like pecl/http, fopen or curl. The new implementation moves the transport code into clients implementing PSR-18. Optionally, they can be wrapped by a Horde\Http\HordeClientWrapper which exposes the PSR-18 interface itself, but otherwise mimics the old Horde_Http_Client class.
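
Consumption then follows the interop standards: any PSR-17 factory can be combined with any PSR-18 client, so calling code no longer cares which transport does the work. A minimal sketch:

use Psr\Http\Client\ClientInterface;
use Psr\Http\Message\RequestFactoryInterface;

function fetchStatus(ClientInterface $client, RequestFactoryInterface $requests): int
{
    // Build the request via PSR-17, send it via PSR-18; whether curl,
    // fopen or something else does the transport is an injection detail.
    $request = $requests->createRequest('GET', 'https://www.horde.org/');
    return $client->sendRequest($request)->getStatusCode();
}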

Horde/http is used by very different parts of Horde, including the horde/dav adapter to SabreDAV, various service integrations (Gravatar, Twitter, …), the horde/feed library and application code all over the place. I intend to upgrade those use cases to the new implementation. I am looking forward to criticism or acceptance of that approach.

The goal of this project is more far-reaching. While Horde 4 and Horde 5 already had horde/controller, they made very limited use of it. In my non-public projects, I relied heavily on controllers and I made several attempts at improving the way controllers are set up in horde/core. However, I always felt the results were clunky and not really what I wished to achieve. While horde/controller knows prefilters and postfilters, these are not easy to use and there are few examples. While doing research, I made up my mind. I want to replace Controller/Prefilter/Postfilter with their PSR-15 (HTTP Handlers and Middleware) equivalents. Controllers will be Handlers, Pre/Postfilters will be Middlewares. Together they will be stacks. Authentication, Authorization, Logging etc will be relegated to Middlewares. There will be a default stack to mimic the default controller behaviour in Horde 5 (Be authenticated or be relegated to the login page). You will be able to define application-specific default stacks or request-specific stacks. As Middlewares are a public standard, we might be able to leverage middlewares existing out in the wild or attract microframework users to some horde built middlewares. I want to make it easier for coders to build horde apps without relearning everything they needed to learn for laravel, laminas or symfony. I also want to make it easier for everyone to cooperate. Horde is among the oldest framework vendors, predating most of PEAR and Zend. I think we still have some bits to offer.

Missing Bits:

  • While I did some implementation of UploadedFileInterface, it is still quite basic
  • UploadedFileFactoryInterface is missing as I have not yet built the server side use cases
  • Unit Tests need to be adapted to the new code base. Is there some PSR Acid Test out there?
  • I began implementing the ext-http (PECL_HTTP) backend but stopped as I am unsure about it. That extension is in version 4 now and upstream still supports version 3, but we have backends for versions 1 and 2. I need to learn more about it and decide if it makes sense to invest into that aspect.

Horde/Rdo ORM: PSR-4 and BC Breaks

Summary: Horde/Rdo ORM got upgraded for namespaces. User code conversion is straightforward. Backward compatibility is limited.

If you ever wondered, RDO stands for Rampage Data Objects. This has been on my list for quite a while, but it took some time to get it right. The horde/rdo library is Horde’s Object Relational Mapping (ORM) solution. It allows you to store objects in SQL databases and retrieve them without writing SQL. It was originally written by Chuck Hagenbuch way back in the PHP 4 days, and I have been a heavy user for years. If you know Laravel’s Eloquent, Doctrine, Hibernate, ActiveRecord, nHydrate or Dapper, this one is Chuck’s “as light as possible” implementation of the concept. Colleagues from B1 Systems have been users and contributors over the years. However, it has long been time to rethink Rdo in the light of newer capabilities of PHP 7.4 or even PHP 8. That time is now.

But you should not fear upgrading. First, the library still keeps the unnamespaced PSR-0 code, at least for the time being. Second, there is a straightforward upgrade path for existing users.

Horde_Rdo_Base -> Horde\Rdo\Base
Horde_Rdo_Mapper -> Horde\Rdo\BaseMapper
Horde_Rdo_Factory -> Horde\Rdo\Factory
Horde_Rdo_List -> Horde\Rdo\DefaultList
Horde_Rdo_Iterator -> Horde\Rdo\DefaultIterator
Horde_Rdo_Query -> Horde\Rdo\DefaultQuery
Horde_Rdo_Exception -> Horde\Rdo\RdoException
Horde_Rdo:: -> Horde\Rdo\Constants::

It’s about as easy as it looks. Converting an application took me a few minutes. You might have noticed the names do not exactly match. Some names were not practical to simply turn into namespaced classes. In other cases, I turned class names into interface names. I found myself implementing the same enhancements over and over in multiple projects and I found myself wishing there was an easy way to do some others.
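
In practice, the conversion is mostly a matter of swapping parent class names, roughly like this (the subclass name is made up):

// Before:
class Myapp_ContactMapper extends Horde_Rdo_Mapper
{
}

// After:
use Horde\Rdo\BaseMapper;

class Myapp_ContactMapper extends BaseMapper
{
}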

Less Boilerplate Mappers and Entities

Rdo is much more fun than some other ORMs as it comes with very little configuration. The library autodetects properties from the table columns. Datetime fields are automatically cast to Horde_Date objects. By default, Rdo takes a convention-over-configuration approach for mapping table names, mappers, entities etc. Unfortunately, for most of my projects, that default does not fit too well with the class structure and file layout I choose. But still, implementing a new pair of mapper and entity takes two files and only two or three settings I ever need to think about.

Most often, subclassing the base mapper and the base entity is the right thing to do. But sometimes, you do not really care. If all you ever do to your entity is call ->toArray() and serialize it to JSON, you would be served very well by a generic entity instead. This is something on my list. I would even go one step further: if all you change in a mapper is subclassing and telling it the name of its database table and entity class, why subclass at all? Yes, I would want to turn the optional Factory class into something smarter. It will give you your mapper, be it an instance of the generic mapper with the right table name or something very customized.
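
The direction could look roughly like this. This is purely an illustration of the idea, not an existing API:

// Today: an almost empty subclass per table, just to set two values.
// Tomorrow: ask a smarter factory for a configured generic mapper.
$mapper = $factory->getGenericMapper('inventory_items', GenericEntity::class);
$json   = json_encode($mapper->findOne($id)->toArray());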

Custom List Objects

Rdo queries always return a Horde\Rdo\DefaultList. This is a good default implementation and it makes common tasks easy. However, there are situations where you want your list of entities to be specific to an entity type, or maybe a subclass of some base list class completely external to Rdo. Maybe you want to manipulate a list or merge results from two different queries.

Custom Entities
Sometimes the default entity implementation does not serve you well. There’s a range of things you would want.
– You want to inherit from a base class to attach behavior to your data. So you attach an interface and a trait to that foreign class to make it Rdo aware.
– You want to implement your own behaviour altogether
– You want Rdo to implement a proper repository with strongly encapsulated, less chatty domain objects. Rdo should provide a mechanism to produce those objects for you rather than having you cast or wrap Rdo\Base objects into your actual models. But it should not force you to think about such concepts before you really need them.

NoSQL backends
Remember the name Rampage Data Objects? Rdo is mostly about mapping data to objects. It’s not about autogenerating the smartest SQL for the most obscure use cases. But once you have your prototype version ready, your first feedback comes in, you think about new features – and suddenly you want to support a new backend for some of your domain objects. Be it a NoSQL database, a key/value store, a limited scope within a directory like LDAP, or even a REST service. In a traditional horde application, you would wrap Rdo into a driver called Driver/Rdo or Driver/Sql and implement a different backend. But what if you do not want to flip all your data to the new backend? Only the shopping list should go to the NoSQL backend, but not the customers or the product inventory? You end up implementing individual drivers with individual backends. But you used Rdo’s relations feature … things get messy.

To achieve these capabilities, I want to make the Mapper less dominant. The formerly optional Factory gets promoted to take care of managing the right mappers, entities, backends, list types. This is what these interfaces are for. Mappers should mostly take care of mapping between an object class and a plain data format. Currently the mapper and query do too much, tightly coupled with the single mandatory list type. This will change.

Rampage Data Objects provides out of the box defaults for easy and common use cases. It gets you started really quick. We will add the capabilities needed when your application is maturing and your use cases get more demanding. This will be fun.

Backward Compatibility Breaks
The Horde\Rdo\Base* classes and their return types will be your best bet for backward compatibility. If you don’t try to use entities and mappers for side effects, you will be very safe. Factory’s constructor will change very soon. Factory should best be created from a Dependency Injector. Mappers should be created from Factory.

You should not rely on mappers exposing adapter or factory for creating other objects. Also, trying to manipulate sql session modes or transactions through Rdo’s adapter is not a good idea.

Why extending PHPUnit might be wrong

Over the last few months, I spent a lot of cold winter evening hours looking into porting ancient PHPUnit 4.x test suites over to PHPUnit 9.x. The test suites, you guessed it, belong to the Horde framework. Horde actually does not just use phpunit but wraps it into its own testing library horde/test. The full details are explained on the wiki. Horde provides multiple ways to actually run the unit tests, either through its components helper or through AllTests.php or through calling individual tests with phpunit. While horde/test adds value and allows simplifying some test scenarios, it also has its own problems.

I’m not saying it didn’t make sense back when this was built. The original alpha release of horde/test dates back to 2011. PHP 5.4 was not yet released, composer would see its initial release in 2012, PEAR was slowly getting old and PEAR2 was not yet officially a dead cow. It was a totally different ecosystem back then. However, times have changed.

As of 2021, phpunit has evolved into a relatively fast moving target and while its primary author Sebastian Bergmann does not break backward compatibility for the sake of it, he is not shy of doing it either. Some parts of phpunit which are relevant for integration may become unavailable in the next major release. Some of PHPUnit’s core classes are clearly marked as internal.

The Horde Test library adds some mandatory boilerplate to phpunit: Each library and app has its own mandatory bootstrap.php file calling into the test library’s bootstrap class and may have an additional autoload file. Also, a test suite should come with an AllTests.php file calling into the test package’s AllTests class and in turn using Horde_Test_AllTests_TestRunner – which extends PHPUnit\Runner\BaseTestRunner but wait…

https://github.com/sebastianbergmann/phpunit/blob/9.5/src/Runner/BaseTestRunner.php

/**
 * @internal This class is not covered by the backward compatibility promise for PHPUnit
 */
abstract class BaseTestRunner

Trouble ahead, maybe. I am not joking. The master branch which will become PHPUnit 10 does not even have that file. We can find a solution for that, sure. But maybe we should not. As of 2021, the horde/test suite contains valuable helpers and extensions to PHPUnit, but none of these really need to hook into phpunit’s core that deep anymore.

Let’s look into different issues covered by that code.

Autoloading and dependency setup

Back in 2011 it made sense to have some glue which combines autoloading with Horde’s and PEAR’s notions of how things should be organized in a file system. That is no longer a core concern. Everybody and his dog use composer and composer’s autoloader, with either PSR-0 or PSR-4 autoloading schemes. Horde 6 will be delivered via composer, does – optionally, partially – break with some older ideas of organisation, and brings some PSR-4 code and a lot of PSR-0 code. In short, there is very limited need for a custom autoloading scheme. The test suite should just rely on some autoloading being set up, by whoever or whatever. It should not address this concern beyond the means already provided by phpunit itself. Code should be accessed by checking if class names are either present or loadable, not by assuming some files are in some location. Of course, this is a little simplified – at its core, all autoloading depends on something being in some well-known location. We should simply rely on the default solution until it is not possible for a specific case – and then address that. Even if we accept the notion of a vendor dir and an autoload file in vendor/autoload.php, we could provision it via horde/components or horde/git-tools or some horde/test utility for the rare cases where just using composer would not be appropriate.
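
In its simplest form, a test bootstrap then shrinks to requiring composer’s autoloader, wherever it came from (the path assumes a standard vendor dir):

<?php
// tests/bootstrap.php -- no Horde-specific glue required.
require_once __DIR__ . '/../vendor/autoload.php';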

Setting up complex test cases

Tests are supposed to be simple. They should focus on one unit under test and keep as much of the ecosystem as possible out of the picture. Dependencies should be stubbed or mocked. Databases and I/O, especially network traffic, should be substituted. A majority of test cases should be modeled with just the library’s own code and PHPUnit’s mocking facilities, maybe depending on some interfaces from some dependency.
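
For the common case, plain PHPUnit facilities are enough. A sketch with invented Transport/Parser classes:

use PHPUnit\Framework\TestCase;

class ParserTest extends TestCase
{
    public function testUppercasesGreeting(): void
    {
        // Stub the I/O dependency instead of touching a real network.
        $transport = $this->createStub(Transport::class);
        $transport->method('read')->willReturn('hello');

        $parser = new Parser($transport);
        $this->assertSame('HELLO', $parser->next());
    }
}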

But sometimes it’s not so easy. Sometimes the unit under test actually IS code interfacing with databases and their subtle differences. Sometimes the code needs to interact with a non-trivial amount of configuration and dependencies. Especially code which interacts with the framework’s core services or which couples the application’s subsystems may not be easy to mock. It makes sense to provide a simplified pretend environment for such test subjects. But that should be opt-in on a case by case basis. It should not be a mandatory tie-in of additional code and boilerplate for even the most trivial test cases.

Integration with tools

horde/components and some other tools wrap phpunit and other utilities. But as the ecosystem changes, the benefits are shrinking. GitHub Actions and GitLab CI have become popular platforms with many ready-made CI tools available to use. It is no longer necessary to run your own Hudson or Jenkins and build your own automation frameworks. While it is still nice to have a short way to run a test suite including dead code and copy/paste detection, coding style fixers etc. without the need to check anything into the SCM or even create a commit, there is little incentive for maintaining a deep integration into your own runtime code. In the end, all testing and quality assurance automation aims to make you deliver safe, stable code as fast as possible. Spending hours writing automated tests makes sense and may save you hours of debugging and frustration. Spending hours writing or fixing a deep integration into some test tool which simply does not want to be integrated? Less so. It should be kept to a minimum. There is a reason why the preferred delivery of some of these tools is not composer but phar.

What can you do?

So what would a new test toolchain look like? Unit tests and even integration tests should rely on a default, external autoloading solution. In the absence of configuration, this should be vendor/autoload.php, however it is delivered.
Unit tests should by default just run off PHPUnit and the library’s code. Maybe it makes sense to provide the most widespread interfaces without actually having to install their backing code. Why not, as long as it does not create manual effort. Any framework-specific recurring need should be addressed by opt-in code provided as traits or helper classes, available through default development-time autoloading sources. Fringe tests relying on specific infrastructure should skip without failing (see the sketch after this paragraph). Configurable tests should run out of the box in a useful default configuration if it makes sense. These changes can be created in an incremental, opt-in fashion with very limited BC breaks. This is good, as nobody has time to waste on large-scale transformation. Remember, it’s all about your code; the tests are just a useful tool.
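
Skipping instead of failing is already built into PHPUnit and needs no framework glue. An infrastructure-dependent test could guard itself like this:

protected function setUp(): void
{
    // Fringe test: requires the ldap extension; skip everywhere else.
    if (!extension_loaded('ldap')) {
        $this->markTestSkipped('The ldap extension is not available.');
    }
}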

Turba Addressbook (II) – Architecture

Welcome back to our mini series on Turba.

Part I covered all the features and integrations provided by Turba.
Part II gives a dive into implementation, code structure etc.
Part III will consist of proposals for a changed architecture.

In the first chapter we looked at Turba’s features, APIs and protocols. In the current installment, I want to present the concepts and structure of the code.

Turba is among the oldest horde applications. As such, it contains parts from various stages of Horde’s development.
Basically, it’s a layered architecture, but not fully fleshed-out or fully separated.

  • Presentation layer
  • Application Logic Layer
  • Storage/Backend layer

This is plugged together with some framework-provided integration points

  • with the sync services and inter-app API, RPC
  • with the portal/blocks service
  • with the Backup API
  • with the Content Tagger

Presentation Layer.

Turba provides both a desktop UI and a mobile UI. The following is mostly about the desktop UI.

Turba’s UI is organized into client pages rather than using a controller/route approach. This means user-visible URLs include files with a .php suffix.
The client pages build the UI, but they are also API endpoints in a very traditional sense: they catch interaction from forms or buttons as GET variables and trigger actions in the backend.
A representative example:

https://github.com/horde/turba/blob/master/search.php#L155
try {
    $share = Turba::createShare(strval(new Horde_Support_Randomid()), $params);
    $vid = $share->getName();
} catch (Horde_Share_Exception $e) {
    $notification->push(sprintf(_("There was a problem creating the virtual address book: %s"), $e->getMessage()), 'horde.error');
    Horde::url('search.php', true)->redirect();
}

Typically, the top part of each client page initializes the application, catches request and environment/session variables, and sets up the business objects.
The middle part usually orchestrates actions depending on which parameters are present or missing.

The lower part will actually output the Horde Topbar, utility javascript, and the actual page content.

A rather extreme example is the data import/export part: https://github.com/horde/turba/blob/master/data.php
It contains mappings, attribute filtering, a longer cascade of ifs and switch statements, and after about 360 lines the actual UI logic starts.

This might sound messier than it really is. Actual functionality is mostly factored out into separate classes, and the UI uses both some View classes and a form library. Turba’s UI is heavy on forms and tables for the very reason that editing and displaying a highly configurable addressbook with hundreds of fields of data is very CRUD-like by nature.

The Turba UI uses three types of helpers to compose the UI:

  • View classes
  • The Forms library
  • HTML templates with PHP snippets

View Classes

The View classes are very similar to the horde/view library and replicate some of its functionality, but they do not use or inherit from it.

Forms Library

Turba is a prominent user of the horde/form library.
This utility allows dynamically composing forms of multiple fields, checking for internal/formal validity of entered values, missing mandatory values, etc.
It couples both a read-only and an editable representation with a lot of processing logic.
Forms rely mostly on server-generated HTML with small parts of JavaScript injected for usability improvements.
The JavaScript snippets may utilize Prototype.js and Scriptaculous, two formerly popular mainstream libraries.

Templates

The HTML templates provide most of the actual presentation apart from the forms – though they also provide some HTML forms to drive interaction.

Mobile View

Turba provides a read-only, touch friendly mobile phone presentation based on jQuery Mobile. This is completely separate from the rest of the turba UI.

Application Logic Layer and Backend Layer.

These two layers are closely tied together so it makes sense to discuss them as one.
From a problem domain perspective, it would make sense to expect these items:

  • Multiple addressbook sources or backends. These are individual configurations using drivers. Multiple sources on the same LDAP driver can represent different directories or different views on one.
  • Addressbooks
  • Addressbook entries or contacts
  • Groups which are both entries and contain entries.

Turba’s logic layer consists of

  • Representations of actual addressbook entries via the Object and Object_Group classes.
  • A collection of reusable static functions in the Turba class.
  • Parts of the base backend driver.
  • Exporters/Importers for formats like LDIF and vCard.
  • Specialised forms which deal with both presentation and state transformations.

Turba Objects and Groups are the common ground here. The objects glue together the actual data from the driver, files from VFS, permissions managed by the driver, the object’s change history and tags in the tagger app.
Groups are the only subtype of Objects or Entries. Other subtypes suggested by the vCard standard, like organisations or locations, get no special treatment. Groups act as virtual addressbooks or views on addressbook data.
Turba’s groups only work with turba-accessible contacts. They cannot reference external contacts from other sources.

The addressbooks don’t really show up as entity objects. They are arrays of passive data managed by different parties.
This makes parts of the logic a little hard to reason about and to cover with unit tests.

A lot of Turba’s logic is data transformation. Backends have a native representation of data as SQL columns, LDAP objects, etc., as well as possibly native key names.
A person’s name may be a column “object_lastname” in one backend, but an attribute “givenName” in another one.

This is complicated by a highly configurable list of fields each backend can hold.
A driver transforms these native formats into a uniform format, turns date strings into date objects, handles blobs, etc.
Both the driver-dependent format and the “turba” format are hashes. It is up to the driver to actually generate a list object containing the individual addressbook entry objects.
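
Schematically, each backend configuration carries such a map between Turba’s uniform attribute names and the backend-native fields, along these lines (the column names are made up):

// Turba attribute => native column/attribute in this source's schema
'map' => array(
    'name'      => 'object_lastname',
    'firstname' => 'object_firstname',
    'email'     => 'object_email',
),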

Another piece of functionality delegated to the driver is deriving TimeObjects from addressbook data. TimeObjects are really not objects but hashes, representing anniversaries or birthdays.

Permission management in Turba uses three distinct approaches:

  • The permission system allows restricting who can view or edit a certain addressbook
  • One addressbook source, usually the SQL db, can be configured to use the Shares system. In this case, the user can delegate access to addressbooks to other users or groups or make them world readable
  • The backing technology can restrict the user further. For example, the LDAP driver can be used to either bind using a service credential or using the user’s credential. Different users could have completely different data presented based on LDAP ACLs.

In the next article of this series, I am going to propose some modernisation approaches for Turba and discuss how they bring benefits in maintaining or extending the software vs being a tedious refactoring exercise.

Turba Addressbook (I) – Features

I Turba Feature Overview

This will be the first part of a short series of articles exploring the Turba application and its architecture.
Part I covers all the features and integrations provided by Turba.
Part II will look into implementation, code structure etc.
Part III will consist of proposals for a changed architecture.

Turba is the addressbook application of the Horde Groupware Suite. It offers access to addressbook information via various means and protocols.

These addressbooks can be of quite different nature:

  • Readonly or Read/Write addressbooks stored in a backing system (LDAP, SQL databases, Kolab)
  • Virtual addressbooks derived from Horde Groups or from favourite addresses data out of the IMP mail program or any other provider implementing the appropriate API.
  • Addressbooks representing contact groups stored in addressbooks
  • There is also a pseudo backing store through the Preferences API, which in turn needs some backing store.

Depending on the type of addressbook and backing system, different features are available:

  • browsing
  • searching for entries matching criteria
  • user-controlled sharing of addressbooks with others
  • administrative reading or writing of data without the user being logged in

Administrators are allowed to customize the fields and presentation of the addressbooks presented to the user.
They may add custom fields relevant for their site, like a student ID number or an SSH public key.
They may remove almost any default field if their LDAP does not expose it or if it’s simply unwanted.

While the primary work model used to be to browse/search the addressbook or to make the addressbook data available in the web interfaces of the calendar and webmail programs, modern users often consume the addressbooks by other methods.

  • Turba allows for syncing addressbooks via the CardDAV protocol into mobile devices or desktop clients like Mozilla Thunderbird.
  • Addressbooks can also be transmitted via Exchange ActiveSync (EAS) or used as a Global Address List (GAL). This allows limited integration with Microsoft Outlook.
  • An older protocol, SyncML, offers similar but more limited capabilities. It has since fallen out of widespread use. Turba is still a valid SyncML server.
  • Addressbooks and contacts are also served via WebDAV as folders and files.
  • Imports/exports to CSV/TSV and LDIF formats, vCard format and some proprietary formats.
  • Addressbook and Contact Reading/Writing is exposed via the Horde Inter-App API and – through this layer – via JSON-RPC and XML-RPC. The RPC layer also has a SOAP interface and an integration driver for phpGroupware / eGroupware though I have no idea if this works with current versions.

The next article in this series will give an overview of the current architecture.

What’s new in Maintaina Horde: Status 3/2021

  • CalDAV and CardDAV now run off SabreDAV 4 rather than SabreDAV 2
  • We now support both the Composer installer versions 1 and 2.
  • Nothing depends on the PEAR protocol anymore.
  • The Horde iCalendar library now supports vCard 4. Still, importing/exporting vCard 4 or using it in CardDAV in the addressbook app Turba is not yet done. This requires good test coverage; syncing is not something I’d like to break.
  • PHPUnit Tests are being upgraded from PHPUnit 4 to PHPUnit 9.
  • All framework and groupware-related libraries are now packaged as “alpha” versions from the FRAMEWORK_6_0 branch.