Get an SSL certificate for the PHP Apache Docker image variant with Certbot

When prototyping or hacking on something with PHP, I use the PHP Docker image variant that includes Apache.

Even if it's a toy project, I get an SSL certificate for its domain when I put it online. Here's yet another step-by-step guide on how to do it, focusing just on the absolute minimum.


We are going to have a Dockerfile with the following:

FROM php:8.1-apache

RUN a2enmod ssl

CMD ["apache2-foreground"]

This is equivalent to running docker container run php:8.1-apache, with the difference that it enables the ssl module; by default, Apache does not have that module enabled.

To build and tag the image:

docker image build -t app .

Let's follow the convention and put our main index.php file in the public folder.
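
For this guide, the contents of index.php don't matter; a placeholder along these lines is enough to confirm that the container serves PHP:

<?php

// Placeholder page, just to verify the setup works.
echo 'Hello from PHP ' . PHP_VERSION;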

We need two folders related to the process of getting the certificates: one for the actual certificates (certs), and one that acts more like a temporary folder (data).

So far, we have this:

.
├── Dockerfile
├── public/index.php
├── letsencrypt/certs/
├── letsencrypt/data/

We also need to customize the default Apache configuration.

The configuration before we have the certificates will look different from the one after. Instead of modifying the contents of the config file along the way, we can prepare both versions in advance:

.
├── apache2/000-no-ssl-default.conf
├── apache2/000-default.conf

Here's the content of the 000-no-ssl-default.conf:

<VirtualHost *:80>
    DocumentRoot /var/www/html/public

    Alias /.well-known/acme-challenge /var/www/letsencrypt/data/.well-known/acme-challenge
</VirtualHost>

80 is the default port for HTTP. We set the document root to the public folder instead of the default /var/www/html/ that the image uses.

The other part creates an "alias": when the /.well-known/acme-challenge URL is requested, Apache serves whatever is located at the specified path. This is where Certbot will place its challenge files.

With all this, we can start the container with the bind mounts:

docker container run \
-d \
-p 80:80 \
-v ${PWD}/public:/var/www/html/public \
-v ${PWD}/apache2/000-no-ssl-default.conf:/etc/apache2/sites-enabled/000-default.conf \
-v ${PWD}/letsencrypt/:/var/www/letsencrypt \
app

If we don't have a domain, we can get a free subdomain from FreeDNS and point it to the server's IP address.

The next step is getting a certificate issued by Let's Encrypt with Certbot.

Let's Encrypt is a free, automated, and open certificate authority (CA), run for the public's benefit.

Certbot is a free, open source software tool for automatically using Let's Encrypt certificates on manually-administrated websites to enable HTTPS.

docker container run \
-it \
--rm \
-v ${PWD}/letsencrypt/certs:/etc/letsencrypt \
-v ${PWD}/letsencrypt/data:/data/letsencrypt \
certbot/certbot certonly \
--webroot \
--webroot-path=/data/letsencrypt \
-d your-domain.com \
--dry-run

This command might look especially long and complicated, but everything before the certbot/certbot part is just common Docker flags, and everything after it is flags specific to the tool.

The relevant pages of the Docker and Certbot documentation explain each of these flags in more detail.

We bind mount the letsencrypt folders into both containers: Certbot needs write access to them, and Apache needs read access.

We run it first with --dry-run to ensure everything works before issuing the real certificates. Repeatedly requesting certificates and running into problems while doing so can hit Let's Encrypt's rate limits and block the domain for several days.

We will be asked to confirm a few things; we just need to follow the instructions. If all is good, we can rerun it without the --dry-run.

We should now see a bunch of files and folders in the letsencrypt/certs/ folder.

Now that we have the certificates, we should use the 000-default.conf. After stopping the current container, we can start it again but this time with:

docker container run \
-d \
-p 80:80 \
-p 443:443 \
-v ${PWD}/public:/var/www/html/public \
-v ${PWD}/apache2/000-default.conf:/etc/apache2/sites-enabled/000-default.conf \
-v ${PWD}/letsencrypt/:/var/www/letsencrypt \
app

The content of the config can be this simple:

<VirtualHost *:80>
    Redirect / https://your-domain.com/
</VirtualHost>
    
<VirtualHost *:443>
    DocumentRoot /var/www/html/public

    SSLEngine on
    SSLCertificateFile "/var/www/letsencrypt/certs/live/your-domain.com/fullchain.pem"
    SSLCertificateKeyFile "/var/www/letsencrypt/certs/live/your-domain.com/privkey.pem"
</VirtualHost>

We leave port 80 accessible, but we redirect all requests to the HTTPS version. In the HTTPS virtual host, we specify the paths to the certificate files.

Laravel-inspired package discovery for HTTP Fn

Usually, there's some wiring to do when you want to use packages with frameworks. Laravel manages to reduce the friction, in most cases, to a minimum with their package discovery feature.

As the Laravel documentation describes it, when someone installs your package, you will typically want your service provider to be registered automatically. Instead of requiring users to manually add your service provider to the application's list of providers, you may define the provider in the extra section of your package's composer.json file.

Once your package has been configured for discovery, Laravel will automatically register its service providers and facades when it is installed, creating a convenient installation experience for your package's users.

This means that typically you don't have to do anything else but run composer require my-fav-package, and you are good to go.

I wanted the same DX for HTTP Fn.


How does it work

Because we don't have to think about it, it might appear magical compared to other frameworks where we have to take some additional steps. As long as a package developer follows the convention, it's no-config for us, the consumers.

The "package discovery" happens after a package is installed or removed with Composer. Composer fires events at certain points, and we can hook arbitrary code into them as scripts; Laravel takes advantage of this.

Practically: after the installation or removal of a package, Laravel (1) loops over all the packages, (2) filters them down to the ones with the discoverable key, and (3) saves the list of "discovered packages". When Laravel is booted, (4) the stored list of discovered packages is merged with the explicitly defined packages, and then Laravel does what it does.

Here's what the "package discovering" step looks like for HTTP Fn:

<?php

declare(strict_types=1);

namespace HttpFn\App\Composer;

use Composer\Script\Event;

class GenerateAutoDiscoveredFnPackageProviderJsonFile
{
    public const JSON_FILE_PATH = 'tmp/fn-package-providers.json';
    public const EXTRA_NAMESPACE_KEY = 'http-fn';
    public const EXTRA_FN_PROVIDER_KEY = 'fnProvider';

    public static function run(Event $event): void
    {
        $providers = [];
        $localPackages = $event->getComposer()->getRepositoryManager()->getLocalRepository()->getPackages();

        if (empty($localPackages)) {
            return;
        }

        foreach ($localPackages as $package) {
            $extra = $package->getExtra();
            $provider = $extra[self::EXTRA_NAMESPACE_KEY][self::EXTRA_FN_PROVIDER_KEY] ?? false;

            if (!$provider) {
                continue;
            }

            $providers[] = $provider;
        }

        if (empty($providers)) {
            return;
        }

        file_put_contents(
            self::JSON_FILE_PATH,
            json_encode(array_unique($providers))
        );
    }
}

In composer.json, this is registered as a script for the post-autoload-dump event:

{
    "scripts": {
        "post-autoload-dump": "HttpFn\\App\\Composer\\GenerateAutoDiscoveredFnPackageProviderJsonFile::run"
    }
}

And here is what the "package registration" step looks like:

function withAutoDiscoveredFnPackageProviders($fnPackageProviders = []): array
{
    $jsonFile = '../' . GenerateAutoDiscoveredFnPackageProviderJsonFile::JSON_FILE_PATH;
    $jsonData = file_exists($jsonFile) ? file_get_contents($jsonFile) : false;

    if ($jsonData !== false) {
        $autoDiscoveredFnPackageProviders = json_decode($jsonData, true) ?? [];

        $fnPackageProviders = [
            ...$fnPackageProviders,
            ...$autoDiscoveredFnPackageProviders,
        ];
    }

    return $fnPackageProviders;
}

$fnPackageProviders = withAutoDiscoveredFnPackageProviders();

Once we have the combined list of package providers, it's a matter of looping over them, checking that they are the right type, and calling their boot method.
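
In simplified form, that last step could look something like the sketch below (the instantiation and the boot() call are stand-ins for whatever HTTP Fn actually does with each provider):

// A minimal sketch, not the exact HTTP Fn code.
foreach ($fnPackageProviders as $fnPackageProviderClass) {
    // The JSON file stores class names, so each provider gets instantiated here.
    $provider = new $fnPackageProviderClass();

    // Ignore anything that doesn't implement the expected provider interface.
    if (!$provider instanceof \HttpFn\App\FnPackage\Provider) {
        continue;
    }

    $provider->boot();
}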


In Laravel, all this is a bit more complex, for good reasons. Check the PR introducing this feature if you are curious.

Custom RSS Bridge for Dense Discovery

Dense Discovery's RSS feed includes neither the publication date (pubDate) nor the issue's content.

I don't mind that they don't include the content. If they want me to visit their page, I can do that. However, my RSS reader sometimes lists the issues in almost random-looking order because of the missing publication date. Or at least, that's what I think is going on.

This annoyance seemed the perfect opportunity to create a custom bridge for RSS Bridge. So I did.


Getting the list of issues and grabbing their content was close to being fun.

On the archive page, all issues are listed in a select tag.

<select id="dynamic_select">
    <option value="">Browse Archive</option>
    <option value="https://www.densediscovery.com/archive/188/">Issue #188</option>
    <option value="https://www.densediscovery.com/archive/187/">Issue #187</option>
    <!-- ... -->
</select>

With the getSimpleHTMLDOMCached helper function, requesting the page and extracting the data was straightforward. Under the hood, it uses a pretty old-school library called simple_html_dom that makes DOM selection and manipulation easy.

private function issuesInfo(): array
{
    $html = getSimpleHTMLDOMCached(self::ARCHIVE_URL);
    $optionHtmlElements = array_slice($html->find('#dynamic_select option'), 1);

    $issuesInfo = [];

    foreach ($optionHtmlElements as $htmlElement) {
        $issuesInfo[] = [
            'title' => $htmlElement->innertext,
            'url' => $htmlElement->getAttribute('value'),
        ];
    }

    return $issuesInfo;
}

I mostly left the content of the issues untouched; I just removed the comments section and fixed the paths of the images. For the path fixing, I used the defaultLinkTo helper function.

private function issueHtmlContent(string $url): string
{
    $html = getSimpleHTMLDOMCached($url);

    $comments = $html->find('#comments', 0);
    $comments->remove();

    return (string)defaultLinkTo($html, $url);
}

Publication dates gave me a bit of a headache. The dates are not mentioned anywhere, not even in a meta tag in the source code.

Since I can't extract the dates, and because this is a weekly newsletter and I know the date of the latest issue (188), I'm assigning them myself.

I'm setting the date of issue 187 one week earlier than issue 188, the date of issue 189 one week later, and so on. It's probably not perfectly accurate, but it should be close, and it solves the ordering problem I had.

private function issueTimestamp(int $issueNr): int
{
    $issueNrRelativeToBaseIssue = abs(self::ISSUE_NR_188 - $issueNr);
    $dateRelativeToBaseIssueDate = new DateInterval("P{$issueNrRelativeToBaseIssue}W");

    $baseIssueDate = new DateTimeImmutable(self::ISSUE_NR_188_DATE);

    if (self::ISSUE_NR_188 < $issueNr) {
        return $baseIssueDate->add($dateRelativeToBaseIssueDate)->getTimestamp();
    }

    return $baseIssueDate->sub($dateRelativeToBaseIssueDate)->getTimestamp();
}
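
Tying the three helpers together, the bridge's collectData() method is essentially a loop that builds the feed items. Here's a sketch of it; pulling the issue number out of the title with a quick preg_match is only illustrative and may differ from the actual bridge:

public function collectData()
{
    foreach ($this->issuesInfo() as $issueInfo) {
        // "Issue #188" -> 188; illustrative extraction of the issue number.
        preg_match('/\d+/', $issueInfo['title'], $matches);
        $issueNr = (int) ($matches[0] ?? 0);

        $this->items[] = [
            'title' => $issueInfo['title'],
            'uri' => $issueInfo['url'],
            'content' => $this->issueHtmlContent($issueInfo['url']),
            'timestamp' => $this->issueTimestamp($issueNr),
        ];
    }
}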

Overall, I liked the developer experience. The common problems have a ready-made solution, and the documentation is helpful. It's something that I'll probably use in the future too.

HTTP Fn, an extensible, event-driven micro framework for random functions

I thought about using AWS Lambda or similar for the functions I need for my personal sites and other needs. But since I have a VPS where I self-host an RSS reader, a bookmark manager, etc., and since I'm the only one using these applications, the server is underutilized; sometimes, it just sits idle. Why not make use of it, right?

I looked a bit into self-hosted, open-source serverless options, but they all require an infrastructure complexity that I don't want to maintain. Having separate Docker containers for existing services is one thing, but managing containers with Kubernetes is entirely different.

If I put aside my tech fetishism, I could get away with separate PHP files for different things. On the other hand... I still want a bit more separation, a bit more structure.

In the end, I decided to make a lightweight framework-like application that takes care of bootstrapping extensions, modules, plugins, packages, whatever you want to call it.

The packages define their routes and decide how they want to handle the requests and how to respond. The package registration works similarly to Laravel's package discovery, using the extra section of composer.json.

{
    "name": "http-fn/foo",
    "extra": {
        "http-fn": {
            "fnProvider": "HttpFn\\FnPackage\\Foo\\Provider"
        }
    }
}

The packages are self-contained; they can be as simple as a callback function or can grow into multiple classes with tests.

<?php

namespace HttpFn\FnPackage\Foo;

class Provider implements \HttpFn\App\FnPackage\Provider
{
    public function routeMethod(): RouteMethod
    {
        return RouteMethod::GET;
    }

    public function routePattern(): string
    {
        return '/foo';
    }

    public function handlerCallback(): callable
    {
        return function (RequestInterface $request, ResponseInterface $response): ResponseInterface {
            $response->getBody()->write('foo');

            return $response;
        };
    }
}

The main application requires these modules like any other Composer package:

composer require http-fn/foo

So far, I'm happy with it!


Here's a super short demo, which took more effort to put together than I imagined:

My WordPress environment for vetting Codeable experts

I have been vetting experts at Codeable since 2019, reviewing both front-end and full-stack (plugin) submissions. The internal process has changed throughout the years, but our commitment to giving a fair review has remained unchanged.

Part of providing a fair review is making sure we are testing the applicants' code in a "clean" environment. Clean, in this case, means barebones, an environment as typical as a WordPress installation can be.

To achieve this, I'm using wp-env to spin up a "brand new" WordPress installation before every review. As the wp-env reference page says, "it's simple to install and requires no configuration", as long as you have the prerequisites: Docker and Node.js installed.

The process

I start by cloning my boilerplate for the environment. This consists only of a few files:

.
├── .gitignore
├── .wp-env.json.example
├── package.json
├── phpcs
└── setup

Next, I make the initial setup script executable, then run it:

chmod +x ./setup
./setup

It's a simple script but it saves some repetition:

#!/bin/bash
cp .wp-env.json.example .wp-env.json
npm install
chmod +x ./phpcs

In my package.json, I don't have anything besides the wp-env dependency:

{
    "devDependencies": {
        "@wordpress/env": "^4.5.0"
    }
}

One of our selected services for managing the application process is CodeScreen. It's a "developer assessment platform, allowing us to accelerate our hiring by screening developers fairly, quickly, and accurately."

CodeScreen makes the code submission available in a GitHub repo, so I can clone it into the environment's directory:

git clone git@github.com:codescreen/CodeScreen_xxx.git CodeScreen_xxx

Depending on the application type, I map the plugin or the theme to the WordPress instance. I do this by editing the .wp-env.json file:

{
    "mappings": {
        "wp-content/plugins/xxx": "./CodeScreen_xxx/wp-content/plugins/xxx"
    }
}

At this point, everything is set up, and I can start the local WordPress environment with:

npx wp-env start

We require candidates to follow a coding standard. If they don't explicitly state which one they follow, we assume they are using the WordPress Coding Standards (WPCS). To check the results, I use my helper script:

./phpcs ./CodeScreen_xxx/wp-content/plugins/xxx

This runs PHPCS with the preinstalled and configured WPCS inside a Docker image. It's just a "shortcut script" for:

#!/bin/bash
CMD=$*

if [[ -z ${CMD} ]]; then
    echo "No path provided for the PHPCS. Do it?"
    exit 1
fi

docker run -it --rm -v $(pwd):/app willhallonline/wordpress-phpcs:alpine phpcs "$CMD"

When I'm done with the review, I stop and destroy the local environment with:

npx wp-env stop
npx wp-env destroy