Inpsyde Modularity Properties

If you are building a WordPress plugin, you typically need to reference specific values repeatedly—for example, the version or base path. The same is true for a theme or a library.

A pretty common approach is to define these as constants; plenty of plugin boilerplates suggest it.

If you have a "main" plugin class, you might have these "constants" as properties too.

This approach immediately introduces duplication because some of the values are typically also found in the Plugin Header, like the version. Hack together a release script and problem solved(?!).
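For instance, in a hypothetical plugin (the names here are made up), the same version string ends up living in two places:

```php
<?php
/**
 * Plugin Name: My Plugin
 * Version: 1.0.0
 */

// The same value again, duplicated from the Plugin Header above.
define('MY_PLUGIN_VERSION', '1.0.0');
define('MY_PLUGIN_PATH', __FILE__);
```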

The various Properties classes of Inpsyde Modularity deal with this topic. They centralize the "constants" in one place, and they also conveniently set them up.

Instead of defining typical constants one by one:

define('PLUGIN_NAME_VERSION', '1.0.0');
define('PLUGIN_NAME_PATH', __FILE__);

you can do this:

use Inpsyde\Modularity\Properties\PluginProperties;

$pluginProperties = PluginProperties::new('/path/to/plugin-main-file.php');

and it will parse the plugin's header and set up the properties. You can then access the values by calling methods on the object:

$pluginProperties->version(); // version of the plugin
$pluginProperties->name(); // name of the plugin

If you are using Modularity for a theme, you have the ThemeProperties class:

$themeProperties = ThemeProperties::new('/path/to/theme-directory/');

For completeness, there are two more classes: LibraryProperties and BaseProperties.

The BaseProperties doesn't parse any file; it expects you to pass some values to it manually. The LibraryProperties parses a composer.json file. More about these two somewhere else.
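As a sketch (check the documentation for the exact signature, which I'm assuming mirrors PluginProperties::new()), creating a LibraryProperties instance looks similar, except you point it at a composer.json:

```php
use Inpsyde\Modularity\Properties\LibraryProperties;

// Assumed to mirror PluginProperties::new(); the path is a placeholder.
$libraryProperties = LibraryProperties::new('/path/to/composer.json');

$libraryProperties->version();
```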

For more detailed information, check the documentation.

Next up the ...

Get an SSL certificate for the PHP Apache Docker image variant with Certbot

When prototyping or hacking on something with PHP, I use the PHP Docker image variant that includes Apache.

Even if it's a toy project, I get an SSL certificate for its domain when I put it online. Here's yet another step-by-step instruction on how to do it, focusing only on the absolute minimum.

We are going to have a Dockerfile with the following:

FROM php:8.1-apache

RUN a2enmod ssl

CMD ["apache2-foreground"]

This is equivalent to running docker container run php:8.1-apache, with the difference that it enables the ssl module. By default, Apache does not have that module enabled.

To build and tag the image:

docker image build -t app .

Let's follow the convention and put our main index.php file in the public folder.

We need two folders related to the process of getting the certificates: one for the actual certificates (certs), and one that acts more like a temporary folder (data).
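Creating that layout up front is a one-liner (the folder names are the ones used throughout this post):

```shell
mkdir -p public letsencrypt/certs letsencrypt/data
```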

So far, we have this:

├── Dockerfile
├── public/index.php
├── letsencrypt/certs/
├── letsencrypt/data/

We also need to customize the default Apache configuration.

The configuration before the certificate will look different than the one after we have the certificates. Instead of modifying the contents of the config file, we can prepare them in advance:

├── apache2/000-no-ssl-default.conf
├── apache2/000-default.conf

Here's the content of the 000-no-ssl-default.conf:

<VirtualHost *:80>
    DocumentRoot /var/www/html/public

    Alias /.well-known/acme-challenge /var/www/letsencrypt/data/.well-known/acme-challenge
</VirtualHost>

80 is the default port for HTTP. We set the document root to the public folder instead of the default /var/www/html/ that Apache uses.

The other part is creating an "alias": when a URL under /.well-known/acme-challenge is requested, Apache serves whatever is located at the specified filesystem path. That path is where Certbot will place its challenge files later.

With all this, we can start the container with the bind mounts:

docker container run \
-d \
-p 80:80 \
-v ${PWD}/public:/var/www/html/public \
-v ${PWD}/apache2/000-no-ssl-default.conf:/etc/apache2/sites-enabled/000-default.conf \
-v ${PWD}/letsencrypt/:/var/www/letsencrypt \
app
If we don't have a domain, we can get a subdomain free from FreeDNS and point it to the server's IP address.

The next step is getting a certificate issued by Let's Encrypt with Certbot.

Let's Encrypt is a free, automated, and open certificate authority (CA), run for the public's benefit.

Certbot is a free, open source software tool for automatically using Let's Encrypt certificates on manually-administrated websites to enable HTTPS.

docker container run \
-it \
--rm \
-v ${PWD}/letsencrypt/certs:/etc/letsencrypt \
-v ${PWD}/letsencrypt/data:/data/letsencrypt \
certbot/certbot certonly \
--webroot \
--webroot-path=/data/letsencrypt \
-d \

This command might look especially long and complicated, but everything up to the certbot/certbot part is just common Docker flags, and everything after it is flags specific to the tool.


We bind mount the letsencrypt folders into both containers: Certbot has to be able to write to them, and Apache has to be able to read from them.

We run it first with the --dry-run flag to ensure everything runs fine before issuing the certificates. Repeatedly requesting certificates while running into problems will get the domain rate-limited for several days.
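For reference, here is a sketch of the same command with --dry-run added; the domain after -d is a placeholder you'd replace with your own:

```shell
docker container run \
-it \
--rm \
-v ${PWD}/letsencrypt/certs:/etc/letsencrypt \
-v ${PWD}/letsencrypt/data:/data/letsencrypt \
certbot/certbot certonly \
--webroot \
--webroot-path=/data/letsencrypt \
--dry-run \
-d example.com
```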

We will be asked to confirm a few things; we just need to follow the instructions. If all is good, we can rerun it without the --dry-run.

We should now see a bunch of files and folders in the /letsencrypt/certs/ folder.
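In my experience with Certbot, the interesting part of that output is the live/ subfolder for the domain, which typically contains:

```
letsencrypt/certs/live/<your-domain>/
├── cert.pem
├── chain.pem
├── fullchain.pem
├── privkey.pem
```

fullchain.pem and privkey.pem are the two files the SSL directives in the Apache config point at.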

Now that we have the certificates, we should use the 000-default.conf. After stopping the current container, we can start it again but this time with:

docker container run \
-d \
-p 80:80 \
-p 443:443 \
-v ${PWD}/public:/var/www/html/public \
-v ${PWD}/apache2/000-default.conf:/etc/apache2/sites-enabled/000-default.conf \
-v ${PWD}/letsencrypt/:/var/www/letsencrypt \
app

The content of the config can be this simple:

<VirtualHost *:80>
    Redirect /
</VirtualHost>

<VirtualHost *:443>
    DocumentRoot /var/www/html/public

    SSLEngine on
    SSLCertificateFile "/var/www/letsencrypt/certs/live/"
    SSLCertificateKeyFile "/var/www/letsencrypt/certs/live/"
</VirtualHost>

We leave port 80 accessible, but we redirect all requests to the HTTPS version. In the HTTPS section, we specify the paths to the certificates.

Laravel-inspired package discovery for HTTP Fn

Usually, there's some wiring to do when you want to use packages with frameworks. Laravel manages to reduce the friction, in most cases, to a minimum with their package discovery feature.

When someone installs your package, you will typically want your service provider to be included in this list. Instead of requiring users to manually add your service provider to the list, you may define the provider in the extra section of your package's composer.json file.

Once your package has been configured for discovery, Laravel will automatically register its service providers and facades when it is installed, creating a convenient installation experience for your package's users.

This means that typically you don't have to do anything else but run composer require my-fav-package, and you are good to go.

I wanted the same DX for HTTP Fn.

How does it work

Because we don't have to think about it, it might appear magical compared to other frameworks where we have to take some additional steps. As long as a package developer follows the convention, it's no-config for us, the consumers.

The "package discovery" happens after a package is installed or removed with Composer. When something happens (events) in Composer, we can run some arbitrary code (command). Laravel is taking advantage of this.

Practically: after the installation or removal of a package, Laravel (1) loops over all the packages, (2) filters them down to the ones with the discoverable key, and (3) saves the list of "discovered packages". When Laravel is booted, (4) the stored list of discovered packages is merged with the explicitly defined packages, and then Laravel does what it does.

Here's what the "package discovery" looks like for HTTP Fn:



namespace HttpFn\App\Composer;

use Composer\Script\Event;

class GenerateAutoDiscoveredFnPackageProviderJsonFile
{
    public const JSON_FILE_PATH = 'tmp/fn-package-providers.json';
    public const EXTRA_NAMESPACE_KEY = 'http-fn';
    public const EXTRA_FN_PROVIDER_KEY = 'fnProvider';

    public static function run(Event $event): void
    {
        $providers = [];
        $localPackages = $event->getComposer()->getRepositoryManager()->getLocalRepository()->getPackages();

        if (empty($localPackages)) {
            return;
        }

        foreach ($localPackages as $package) {
            $extra = $package->getExtra();
            $provider = $extra[self::EXTRA_NAMESPACE_KEY][self::EXTRA_FN_PROVIDER_KEY] ?? false;

            if (!$provider) {
                continue;
            }

            $providers[] = $provider;
        }

        if (empty($providers)) {
            return;
        }

        // Save the list of discovered provider class names.
        file_put_contents(self::JSON_FILE_PATH, json_encode($providers));
    }
}

In the composer.json this is registered as a command for the post-autoload-dump event:

"scripts": {
"post-autoload-dump": "HttpFn\\App\\Composer\\GenerateAutoDiscoveredFnPackageProviderClassNameJsonFile::run"

And here is how the "package registration" looks like:

function withAutoDiscoveredFnPackageProviders(array $fnPackageProviders = []): array
{
    $jsonFile = '../' . GenerateAutoDiscoveredFnPackageProviderJsonFile::JSON_FILE_PATH;
    $jsonData = file_exists($jsonFile) ? file_get_contents($jsonFile) : false;

    if ($jsonData !== false) {
        $autoDiscoveredFnPackageProviders = json_decode($jsonData, true) ?? [];

        $fnPackageProviders = [
            ...$fnPackageProviders,
            ...$autoDiscoveredFnPackageProviders,
        ];
    }

    return $fnPackageProviders;
}

$fnPackageProviders = withAutoDiscoveredFnPackageProviders();

With the combined list of package providers in hand, it's a matter of looping over them, checking that they are the right type, etc., and calling their boot method.
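A hypothetical, self-contained sketch of that loop; the interface and class names here are made up for illustration, not HTTP Fn's actual ones:

```php
<?php

// Made-up minimal interface standing in for HTTP Fn's provider type.
interface FnPackageProvider
{
    public function boot(): void;
}

class FooProvider implements FnPackageProvider
{
    public function boot(): void
    {
        echo "booted\n";
    }
}

// The combined list may contain anything, so we check the type first.
$fnPackageProviders = [FooProvider::class, \stdClass::class];

foreach ($fnPackageProviders as $providerClassName) {
    $provider = new $providerClassName();

    if (!$provider instanceof FnPackageProvider) {
        continue;
    }

    $provider->boot(); // prints "booted" once; stdClass is skipped
}
```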

In Laravel, all this is a bit more complex, for good reasons. Check the PR introducing this feature if you are curious.

Custom RSS Bridge for Dense Discovery

Dense Discovery's RSS feed includes neither the publication date (pubDate) nor the issue's content.

I don't mind that they don't include the content. If they want me to visit their page, I can do that. However, my RSS reader sometimes lists the issues in almost random-looking order because of the missing publication date. Or at least, that's what I think is going on.

This annoyance seemed the perfect opportunity to create a custom bridge for RSS Bridge. So I did.

Getting the list of issues and grabbing their content was close to being fun.

On the archive page, all issues are listed in a select tag.

<select id="dynamic_select">
    <option value="">Browse Archive</option>
    <option value="">Issue #188</option>
    <option value="">Issue #187</option>
    <!-- ... -->
</select>

With the getSimpleHTMLDOMCached helper function, requesting the page and extracting the data was straightforward. Under the hood, it uses a pretty old-school library called simple_html_dom that makes DOM selection and manipulation easy.

private function issuesInfo(): array
{
    $html = getSimpleHTMLDOMCached(self::ARCHIVE_URL);
    $optionHtmlElements = array_slice($html->find('#dynamic_select option'), 1);

    $issuesInfo = [];

    foreach ($optionHtmlElements as $htmlElement) {
        $issuesInfo[] = [
            'title' => $htmlElement->innertext,
            'url' => $htmlElement->getAttribute('value'),
        ];
    }

    return $issuesInfo;
}

I mostly left the content of the issues untouched; I just removed the comments section and fixed the paths of the images. For the path fixing, I used the defaultLinkTo helper function.

private function issueHtmlContent(string $url): string
{
    $html = getSimpleHTMLDOMCached($url);

    // Remove the comments section.
    $comments = $html->find('#comments', 0);
    if ($comments) {
        $comments->outertext = '';
    }

    return (string)defaultLinkTo($html, $url);
}

Publication dates gave me a bit of a headache. The dates are nowhere mentioned, not even in a meta tag in the source code.

Since I can't extract the dates, and because this is a weekly newsletter and I know the date of the last (188) issue, I'm assigning them myself.

I'm setting the date of issue 187 one week earlier than the 188; for the issue 189, one week after 188, etc. It's probably not accurate, but it should be close, and it solves the ordering problem I had.

private function issueTimestamp(int $issueNr): int
{
    $issueNrRelativeToBaseIssue = abs(self::ISSUE_NR_188 - $issueNr);
    $dateRelativeToBaseIssueDate = new DateInterval("P{$issueNrRelativeToBaseIssue}W");

    $baseIssueDate = new DateTimeImmutable(self::ISSUE_NR_188_DATE);

    if (self::ISSUE_NR_188 < $issueNr) {
        return $baseIssueDate->add($dateRelativeToBaseIssueDate)->getTimestamp();
    }

    return $baseIssueDate->sub($dateRelativeToBaseIssueDate)->getTimestamp();
}
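The same calculation as a standalone snippet, with a hypothetical base date (the real one is a constant in the bridge):

```php
<?php

// Hypothetical base values; the bridge stores these as class constants.
const BASE_ISSUE_NR = 188;
const BASE_ISSUE_DATE = '2022-05-10';

function issueTimestamp(int $issueNr): int
{
    $weeks = abs(BASE_ISSUE_NR - $issueNr);
    $interval = new DateInterval("P{$weeks}W");
    $baseIssueDate = new DateTimeImmutable(BASE_ISSUE_DATE);

    return BASE_ISSUE_NR < $issueNr
        ? $baseIssueDate->add($interval)->getTimestamp()
        : $baseIssueDate->sub($interval)->getTimestamp();
}

// Issue #187 lands exactly one week before the base issue.
echo date('Y-m-d', issueTimestamp(187)), "\n"; // 2022-05-03
```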

Overall, I liked the developer experience. The common problems have a ready-made solution, and the documentation is helpful. It's something that I'll probably use in the future too.

HTTP Fn, an extensible, event-driven micro framework for random functions

I thought about using AWS Lambda or similar for the functions I need for my personal sites and other needs. But since I have a VPS where I self-host an RSS reader, a bookmark manager, etc., and since I'm the only one using these applications, the server is underutilized; sometimes, it just sits idle. Why not make use of it, right?

I looked a bit into self-hosted, open-source serverless options, but they all require an infrastructure complexity that I don't want to maintain. Having separate Docker containers for existing services is one thing, but managing containers with Kubernetes is entirely different.

If I put aside my tech fetishism, I could get away with separate PHP files for different things. On the other hand... I still want a bit more separation, a bit more structure.

In the end, I decided to make a lightweight framework-like application that takes care of bootstrapping extensions, modules, plugins, packages, whatever you want to call it.

The packages define their routes and decide how they want to handle the requests and how to respond. The package registration works similarly to the package discovery of Laravel, using the extra section on composer.json.

"name": "http-fn/foo",
"extra": {
"http-fn": {
"fnProvider": "HttpFn\FnPackage\Foo\Provider"

The packages are self-contained; they can be as simple as a callback function or can grow into multiple classes with tests.


namespace HttpFn\FnPackage\Foo;

class Provider implements \HttpFn\App\FnPackage\Provider
{
    public function routeMethod(): RouteMethod
    {
        return RouteMethod::GET;
    }

    public function routePattern(): string
    {
        return '/foo';
    }

    public function handlerCallback(): callable
    {
        return function (RequestInterface $request, ResponseInterface $response): ResponseInterface {

            return $response;
        };
    }
}

The main application requires these modules like any other Composer packages:

composer require http-fn/foo

So far, I'm happy with it!

Here's a super short demo that took longer to record than I imagined: