Some of my short to medium-format posts.

Predominantly a mix of tutorials and shout-outs to resources I like.

Seldom something else.

Facades in Laravel have a specific meaning and should not be confused with the facade pattern. According to the Laravel documentation:

Facades provide a "static" interface to classes that are available in the application's service container.

If this sounds confusing, it becomes clearer once you understand the problem facades solve and how they solve it.

The problem facades solve

Let's consider a scenario where we have an SVG Loader interface and its implementation with multiple dependencies.

namespace Svg;
 
interface Loader {
    public function byName(string $name): string;
}
 
class LocalLoader implements Loader {
    // ...
}

$svgLoader = new Svg\LocalLoader(
    __DIR__ . '/assets/svg',
    new Svg\Normalizer\Bundle(
        new Svg\Normalizer\WhitespaceNormalizer(),
        new Svg\Normalizer\SizeAttributeNormalizer(),
        // ...
    )
);
 
$svgLoader->byName('mastodon');

We don't instantiate this object (and the others) by hand everywhere; a PSR-11 container takes care of it.

We could use a container that supports auto-wiring, or configure the entries ourselves; PSR-11 doesn't define how you add things to the container.

Wherever we want to use the SVG Loader, we inject it as a dependency:

readonly class HtmlComponent {
    public function __construct(
        private Svg\Loader $svgLoader,
        private Template\Renderer $templateRenderer
    ) {
    }
 
    // ...
}

This approach is sound and generally praised because making dependencies explicit is extremely important.

There are other considerations than making dependencies crystal clear, and sometimes, for various reasons, a more concise syntax is preferred:

Facade\Svg::byName('mastodon');

That's what Laravel-like facades "solve". They provide a way to do this. From the docs again:

Laravel facades serve as "static proxies" to underlying classes in the service container, providing the benefit of a terse, expressive syntax ...

An MVP implementation

We don't need Laravel to have facades; a basic implementation is surprisingly simple.

If we wanted to provide a facade for exactly one class and one method, we could do this:

namespace Facade;
 
class Svg {
    public static function byName(string $name): string {
        // container() -> \Psr\Container\ContainerInterface
        $svgLoader = container()->get(
            // We used the FQN as the ID for the container
            \Svg\Loader::class
        );
 
        return $svgLoader->byName($name);
    }
}

We need a facade class with the same method name as the proxied one, but this time static.

(If we have multiple methods, it's clear we will end up with a lot of duplication and maintenance overhead.)

The service locator pattern

When we are calling container(), we are accessing the container that implements ContainerInterface.

It doesn't have to be a function; it can be done in many ways, arguably some worse than others:

$instance = container()->get($id);
// or
$instance = Container::services()->get($id);
// or
global $container;
 
$instance = $container->get($id);
// or some other way

This is the part that is controversial and why some dislike facades.

Ultimately, we are providing another "syntax" to grab objects from the application container from anywhere, anytime.

This is the service locator pattern hidden in plain sight.

The Laravel documentation provides some warnings and thoughtful advice:

However, some care must be taken when using facades. The primary danger of facades is class "scope creep". Since facades are so easy to use and do not require injection, it can be easy to let your classes continue to grow and use many facades in a single class.

A more flexible approach

Typically, we will want facades for multiple classes with various methods.

Here's a possible implementation of a more flexible solution:

namespace Facade;
 
abstract class Facade
{
    abstract protected static function proxiedId(): string;
 
    public static function __callStatic(string $name, array $arguments)
    {
        $instance = container()->get(
            static::proxiedId()
        );
 
        return $instance->$name(...$arguments);
    }
}
 
/**
 * @method static string byName(string $name)
 *
 * @see \Svg\Loader
 */
class Svg extends Facade
{
    protected static function proxiedId(): string
    {
        // We used the FQN as the ID for the container
        return \Svg\Loader::class;
    }
}

Compared to what we had, we replaced the "duplicated" method(s) with the magic method.

The consequence of this is losing autocomplete in IDEs. To overcome it, we added hints using DocBlocks.

Some duplication remains, but all things considered, DocBlocks cause fewer headaches. They are not even required; they are a convenience.

With the introduction of the Facade\Facade base class, we simplified things further while gaining a place to provide helper methods for all facades.
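For comparison, the same dynamic-dispatch idea can be sketched in JavaScript, where a Proxy plays the role of __callStatic and a plain Map stands in for the container. Everything here (the container contents, the SvgLoader shape, the facade helper) is made up for illustration:

```javascript
// A stand-in container: service IDs map to already-built instances.
const container = new Map([
  ['SvgLoader', { byName: (name) => `<svg data-name="${name}"></svg>` }],
]);

// facade(id) returns an object that forwards any method call to the
// instance resolved from the container, much like __callStatic does.
function facade(id) {
  return new Proxy({}, {
    get: (_target, method) =>
      (...args) => container.get(id)[method](...args),
  });
}

const Svg = facade('SvgLoader');
Svg.byName('mastodon'); // resolved and forwarded at call time
```

The resolution happens at call time, so whatever the container holds for that ID at that moment is what gets used.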

Last words

Of course, Laravel's implementation is far more complex; it provides performance optimizations, has error checking, etc., which you should do in an actual project. But at the end of the day, this is the core of it.

Testability

Interestingly, by far, most of the code in Laravel's Facade class is about providing a way to test them.

All facades have methods like expects, shouldReceive, spy, which might sound familiar because they are from Mockery.

Whenever you call a method like expects(), that is "proxied" to Mockery, which allows you to set up expectations "as usual".

namespace Illuminate\Support\Facades;
 
class Svg extends Facade {
    // ...
}
 
Svg::shouldReceive('byName')
    ->with('mastodon')
    ->andReturn('<svg>...</svg>');

Those who argue that facades make the code hard to test or even untestable in Laravel might not know this.


How to replicate this behavior or how to test the MVP implementation is for another article, but it's definitely possible.

Frameworks like Alpine and Stimulus continue to enjoy widespread appeal. I'm a big fan too.

Their declarative and component-based approach to UI, which is also characteristic of React, Vue.js, Svelte, etc., is enjoyable and offers easy-to-understand patterns.

<div x-data="{ count: 0 }">
    <button x-on:click="count++">Increment</button>
 
    <span x-text="count"></span>
</div>

However, there are instances where the HTML cannot be "decorated" with attributes of this kind. Or perhaps you dislike "polluting" the DOM or prefer less magic and to stay closer to barebones JavaScript.

If that's the case, but you still want state-driven reactive UIs, using Signals at the core of your components might be the solution.

Signals

@preact/signals-core is a good choice for state management and to be the driving force behind the UI and DOM updates. Using the signal and effect functions provides you with the low-level API necessary for creating reactive components.

import { signal, effect } from '@preact/signals-core';
 
const name = signal('Jane');
 
// Logs name every time it changes:
effect(() => console.log(name.value));
// Logs: "Jane"
 
// Updating `name` triggers the effect again:
name.value = 'John';
// Logs: "John"

This simplicity is deceptively powerful.

If you are familiar with any of the previously mentioned frameworks, the following examples will feel familiar.

Compared to Alpine

The chosen components are from the Alpine getting started page, where they build the same UI elements "their way", so you can easily compare the two approaches.

Without further ado, here's the counter:

<div class="counter">
    <button type="button">Increment</button>
    <span></span>
</div>
function counter(rootElement) {
    const count = signal(0);
 
    const displayCount = () => {
        rootElement.querySelector('span').innerHTML = count.value;
    };
 
    const handleIncrementCount = () => {
        count.value = count.value + 1;
    };
 
    const init = () => {
        effect(displayCount);
 
        rootElement
            .querySelector('button')
            .addEventListener('click', handleIncrementCount);
    };
 
    return {
        init,
    };
}
 
counter(document.querySelector('.counter')).init();

Because we used a signal in displayCount, all we had to do was "wrap" displayCount in an effect.

As the documentation says:

To run arbitrary code in response to signal changes, we can use effect(fn). [...], effects track which signals are accessed and re-run their callback when those signals change.

You are not alone if this reminds you of some sort of proxy state. It's a bit like that but technically very different.

You can also play with it on CodePen.
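To build an intuition for that tracking, here is a deliberately tiny toy implementation. This is not how @preact/signals-core works internally; it only mimics the observable behavior, namely that effects re-run when the signals they read change:

```javascript
// The effect currently being registered, if any.
let activeEffect = null;

function signal(initial) {
  let value = initial;
  const subscribers = new Set();

  return {
    get value() {
      // Reading inside an effect subscribes that effect to this signal.
      if (activeEffect) subscribers.add(activeEffect);
      return value;
    },
    set value(next) {
      value = next;
      // Writing re-runs every effect that previously read this signal.
      subscribers.forEach((fn) => fn());
    },
  };
}

function effect(fn) {
  activeEffect = fn;
  fn(); // The first run records which signals are read.
  activeEffect = null;
}

const name = signal('Jane');
effect(() => console.log(name.value)); // logs "Jane"
name.value = 'John'; // logs "John"
```

Missing here: computed values, cleanup, batching, and cycle protection, all of which the real library handles.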

This would be the dropdown:

<div class="dropdown">
    <button type="button">Toggle</button>
    <div>Contents...</div>
</div>
function dropdown(rootElement) {
    const open = signal(false);
 
    const displayContent = () => {
        rootElement.querySelector('div').style.display = open.value
            ? ''
            : 'none';
    };
 
    const handleClickOutside = (event) => {
        if (rootElement.contains(event.target)) {
            return;
        }
 
        open.value = false;
    };
 
    const handleToggleOpen = () => {
        open.value = !open.value;
    };
 
    const init = () => {
        effect(displayContent);
 
        rootElement
            .querySelector('button')
            .addEventListener('click', handleToggleOpen);
 
        document.addEventListener('click', handleClickOutside);
    };
 
    return {
        init,
    };
}

Last but not least, the search input:

<div class="search">
    <input type="search" placeholder="Search...">
    <ul>
    </ul>
</div>
// computed comes from '@preact/signals-core', just like signal and effect
function search(rootElement) {
    const items = ['foo', 'bar', 'baz'];
    const search = signal('');
    const matchedItems = computed(() =>
        items.filter((item) => item.startsWith(search.value)),
    );
 
    const displayResults = () => {
        rootElement.querySelector('ul').innerHTML = matchedItems.value
            .map((item) => `<li>${item}</li>`)
            .join('');
    };
 
    const handleQueryChange = (event) => {
        search.value = event.target.value;
    };
 
    const init = () => {
        effect(displayResults);
 
        rootElement
            .querySelector('input')
            .addEventListener('keyup', handleQueryChange);
    };
 
    return {
        init,
    };
}

Of course, these are all naive implementations and are not handling cases where things could go wrong, but they should demonstrate how things could be structured and glued together.


For more complex situations, you can consider @deepsignal, which extends Signals. It allows the state to be written in the following way:

import { deepSignal } from '@deepsignal/preact';
 
const userStore = deepSignal({
    name: {
        first: 'Thor',
        last: 'Odinson',
    },
    email: 'thor@avengers.org',
});

The WordPress Interactivity API builds both on Signals and DeepSignal (and Preact).

It's the first time WordPress has tried to offer some standardization for the JavaScript used on the front end.

Those who have plugins in the WordPress Plugin Directory and rely on manual testing face a challenge: avoiding their plugins being marked as "out of date".

When this occurs, a notice is displayed:

This plugin hasn't been tested with the latest 3 major releases of WordPress. It may no longer be maintained or supported and may have compatibility issues when used with more recent versions of WordPress.

This is often interpreted as a sign that the plugin has been "abandoned."

A lot has been said about "abandoned projects" in the open-source world, including the WordPress space. Various ideas have been proposed to address this issue, from promoting plugin adoption to creating maintenance programs.

Are we in a better situation than we were ten years ago?

One thing is for sure: there's no way around testing. The only way to know something is still working as expected, as advertised, is by somehow checking it.

Manual testing over time is draining, unsustainable, and definitely not scalable. I would go as far as to say it is the reason for "abandonment" in some cases.

The best chance: E2E tests

Among the many types of automated tests, end-to-end (E2E) tests are the easiest to deploy. They are the least invasive and do not require any upfront code changes.

Since most of the plugins were released, E2E testing has become easier and more accessible. And we are close to some reliable no-code, "AI" assisted E2E tools and solutions.

But until then, the most future-proof solution is to use Playwright. This is the tool that WordPress core plans to migrate to, and it is already being used by Gutenberg.
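To give a feel for the effort involved, a minimal Playwright spec could look like the following. The URL, selectors, and expected texts are entirely hypothetical stand-ins for whatever the plugin actually renders:

```javascript
// tests/plugin-smoke.spec.js - run with: npx playwright test
import { test, expect } from '@playwright/test';

test('the plugin settings page renders', async ({ page }) => {
  // Hypothetical: visit the plugin's settings screen on a local test site.
  await page.goto('http://localhost:8889/wp-admin/options-general.php?page=my-plugin');

  // Hypothetical: the page shows the plugin's heading and a save button.
  await expect(page.getByRole('heading', { name: 'My Plugin' })).toBeVisible();
  await expect(page.getByRole('button', { name: 'Save Changes' })).toBeVisible();
});
```

A spec like this needs a running WordPress instance (and usually a login step), so treat it as a starting skeleton rather than something that passes as-is.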

Maintaining the tests for others

While updating a small plugin, I added some E2E tests. However, they were not merged because the maintainer did not want to handle the complexity of the tool.

This decision was completely fair, considering the size and usage of the plugin, among other factors.

Nevertheless, the E2E tests themselves are not the problem; it's the maintainability of the tool. That's why I decided not to discard the tests and took on the responsibility of maintaining the Playwright setup myself.

My plan is to run the tests regularly and notify the developer if anything breaks. Likewise, maybe give them a ping when everything is smooth. Fingers crossed.

Who knows, I might add tests for other plugins over time. There are a few plugins that I would like to see around in the next few years.

If nothing else, when the time eventually comes, these tests can provide a solid foundation for forking the project, refactoring it, and adding new features.

There's a plugin called Comment Saver, developed by Will Norris and released in 2008. Until a few weeks ago, it "officially" only supported WordPress 2.8.

I stumbled upon it accidentally while exploring someone's WordPress installation, and to my surprise, the plugin still functioned, more or less.

By "more or less," I mean it worked when the debug mode was turned off. Enabling debug mode caused it to throw some warnings and deprecation notices, breaking comment submission because ...

Never mind, because it's no longer the case, as the issue has been fixed. Besides this, the dependency on jQuery was dropped. Now it can be used without adding extra fluff.

The plugin is ready to last another 15 years. Or not.

While I made these small changes, Will put in just as much effort. He dug up the repo and made it available on GitHub, reviewed the submitted code, and added some automation for the release. All this even though he is no longer connected to the WordPress space.

It was understandable that he wanted to avoid the complexity of maintaining E2E tests.

This got me thinking, and it gave me an idea.

There is the ... operator in PHP (and other languages) known as the splat operator, which, when combined with type hinting, proves to be extremely useful for accepting any number of "things."

However, what if you want to ensure at least one "thing"?

If you do the checking "manually", that's totally fine. Depending on your use case, you can deal with no "things" or require it.

class Foo
{
    private array $conditions = [];
 
    public function __construct(Condition ...$conditions)
    {
        // Let's make damn sure there's a condition!
        assert(!empty($conditions));
 
        $this->conditions = $conditions;
    }
 
    public function __invoke(): array
    {
        // Or be less strict about it and handle it as a possible state
        if (empty($this->conditions)) {
            return [];
        }
 
        // ...
    }
}

The alternative solution looks like this:

class Foo
{
    private array $conditions = [];
 
    public function __construct(Condition $firstCondition, Condition ...$conditions)
    {
        $this->conditions = [$firstCondition, ...$conditions];
    }
 
    public function __invoke(): array
    {
        // No checks needed
        // ...
    }
}

I like the expressivity of this solution, and it's also more compact.
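For what it's worth, the same trick translates directly to JavaScript's rest parameters. A sketch, with makeFilter and the conditions invented for illustration (note that JavaScript won't complain at the call site if the first argument is missing; the signature merely documents the intent):

```javascript
// A required first parameter signals "at least one condition";
// the rest parameter collects any additional ones.
function makeFilter(firstCondition, ...conditions) {
  const all = [firstCondition, ...conditions];

  // The returned predicate passes only when every condition passes.
  return (value) => all.every((condition) => condition(value));
}

const isPositiveEven = makeFilter(
  (n) => n > 0,
  (n) => n % 2 === 0,
);

isPositiveEven(4); // true
isPositiveEven(-2); // false
```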

Since the splat operator has been around for ages, I imagine this "pattern" or "trick" already has a name.

Now, it's just a matter of remembering and applying. That's the "easy" part.

Higher-order functions and higher-order (React) components are well-known concepts, but the term "higher-order block" might not be so familiar.

In this article, a higher-order block refers to a parent block that wraps another block, modifying its nested block's functionality in a meaningful way, beyond just visual changes.

Examples used in this article

Let's use a Cards Grid and an Orderby Condition (higher-order) block as examples.

The Cards Grid block, when inserted, lists six posts from newest to oldest in a 3x2 grid.

The Orderby Condition block renders its inner blocks and provides a control for changing the ordering (title, date, etc.).

When the Cards Grid is a child of the Orderby Condition block, it lists the articles based on the selected option rather than the default.

The purpose of higher-order blocks

Certainly, the described Cards Grid could simply include the control for the ordering, but let's not dwell on the chosen example.

It's easy to imagine how the controls of a block could exponentially increase if new features are introduced. To avoid confusing users, controls might be displayed conditionally, resulting in complex logic. Logic that is hard to maintain.

Higher-order blocks aim to solve this problem by encapsulating the functionality that changes a block. Depending on the higher-order block, the functionality of the child block could vary.

This solution, in our example, allows the Cards Grid to remain relatively untouched by changes, while providing the freedom to introduce new higher-order blocks (new features) and phase out others that are no longer needed.

The underlying technical solution

The Block Editor (Gutenberg) already provides a way for sharing data between ancestor and descendant blocks.

Block context is a feature which enables ancestor blocks to provide values which can be consumed by descendent blocks within its own hierarchy.

The documentation goes on to say:

Those descendent blocks can inherit these values without resorting to hard-coded values and without an explicit awareness of the block which provides those values.

With the Block Context, we can store the selected value of the orderby at the parent level (Orderby Condition block) and pass it down, making it available to the child (Cards Grid).

There's one caveat, though: blocks using server-side rendering do not have access to the context. This might change in the future; there's a ticket about this.

The gist of the implementation

Here are the important and relevant parts of the Orderby Condition block.

{
    "name": "acme/orderby-condition",
    "attributes": {
        "orderby": {
            "type": "string"
        }
    },
    "providesContext": {
        "acme/hob-orderby": "orderby"
    }
}
const Edit = (props) => {
    // ...
    const blockProps = useBlockProps();
    const { children } = useInnerBlocksProps(blockProps, {
        allowedBlocks: ALLOWED_BLOCKS,
    });
 
    return (
        <>
            {/* ... */}
            <div {...blockProps}>{children}</div>
        </>
    );
};
 
const Save = () => <InnerBlocks.Content />;

Let's use a server-side rendered block for Cards Grid and find a solution for the "missing context".

{
    "name": "acme/card-grid",
    "usesContext": ["acme/hob-orderby"]
}
const Edit = (props) => {
    // ...
 
    return (
        <>
            {/* ... */}
            <ServerSideRender
                block={props.name}
                attributes={props.attributes}
            />
        </>
    );
};

Overcoming the missing context

context is not a valid prop of the ServerSideRender component, so we can't pass it.

If we want to pass the consumed context as an "extra" attribute, it will be removed. Unknown, undefined attributes are discarded "thanks" to __experimentalSanitizeBlockAttributes.

<ServerSideRender
    block={props.name}
    attributes={{ orderby: 'this-is-removed-as-it-is-not-registered' }}
/>

Adding the higher-order attributes manually to the inner block(s) would be unfortunate, as we would increase the coupling between the blocks more than necessary.

With the introduction of more blocks and more controls, things would become bloated; we would have to add more attributes in even more places. That was what we wanted to avoid in the first place.

The good news is, we already have a connection between the blocks: the context. We can use that information to "fill in the gaps".

Using a filter to do that

Using the block_type_metadata filter and a bit of strictness (and hackiness), we can make the attributes available dynamically.

add_filter(
    'block_type_metadata',
    static function (array $metadata): array {
        // Don't make changes to blocks that are not ours
        if (!str_starts_with($metadata['name'], 'acme/')) {
            return $metadata;
        }
 
        // We only care about applying higher-order block attributes
        $hobContexts = array_filter(
            $metadata['usesContext'] ?? [],
            fn(string $contextKey) => str_starts_with($contextKey, 'acme/hob-')
        );
 
        foreach ($hobContexts as $contextKey) {
            $attribute = str_replace('acme/hob-', '', $contextKey);
 
            $metadata['attributes'][$attribute] = [];
        }
 
        return $metadata;
    }
);

The convention here is the following: context keys prefixed with hob- are turned into "fake" attributes.

// The input
$metadata = [
    'usesContext' => ['acme/hob-orderby'],
];
 
// The output
$metadata = [
    'usesContext' => ['acme/hob-orderby'],
    'attributes' => [
        'orderby' => [],
    ],
];

Thanks to this, the attributes won't be stripped if they are passed down:

const Edit = (props) => {
    // ...
 
    return (
        <>
            {/* ... */}
            <ServerSideRender
                block={props.name}
                attributes={{
                    ...props.attributes,
                    // We can create a helper to make this mapping similar to the PHP
                    orderby: props.context['acme/hob-orderby'],
                }}
            />
        </>
    );
};
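The helper hinted at in the comment above could look something like this. It is a sketch; the name hobAttributes and its exact shape are my invention, only the acme/hob- convention comes from the PHP filter:

```javascript
// Mirrors the PHP filter's convention: context keys prefixed with
// "acme/hob-" are turned into attribute name/value pairs.
const HOB_PREFIX = 'acme/hob-';

function hobAttributes(context) {
  return Object.fromEntries(
    Object.entries(context)
      .filter(([key]) => key.startsWith(HOB_PREFIX))
      .map(([key, value]) => [key.slice(HOB_PREFIX.length), value]),
  );
}

hobAttributes({ 'acme/hob-orderby': 'title' });
// → { orderby: 'title' }
```

With it, the attributes passed to ServerSideRender could become { ...props.attributes, ...hobAttributes(props.context) }.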

Then inside the render_callback, we can get access to it:

register_block_type_from_metadata(
    'block.json',
    [
        'render_callback' => function (array $attributes, string $content, WP_Block $block): string {
            // The args building can be extracted to a dedicated builder class when using OOP
            $posts = get_posts([
                'orderby' => $attributes['orderby'] ?? $block->context['acme/hob-orderby'] ?? 'date',
            ]);
 
            // ...
            return $output;
        }
    ]
);

As a summary

With this approach, we can maintain a separation between the higher-order block and the inner block while still allowing the higher-order block to modify the inner block's behavior.

This approach should give us a more modular and maintainable structure for our blocks, which is increasingly important as new features are added.

If you haven't read A bespoke PHP SSG, post entity creation from Markdown files, do so.

TL;DR: by extracting the logic that determines which factory to use, we ended up with a class like this:

class PostTypeFactoryPicker
{
    public function pick(MdFile $inputFile): PostTypeFactory
    {
    }
}

The "problem" with it

Attempting to test a class, or even just imagining how to test it, reveals how (un)testable the code is. This is not necessarily because the code is "bad"; rather, it lacks the "right" structure.

We need to organize the code in a specific way to facilitate testing. Consider the difference it makes whether you create classes inside other classes or pass them in as dependencies.

Extensibility is similar.

If we don't attempt to extend or at least envision how someone else could extend our code, we never grasp certain aspects of it.

Imagine that

To introduce another post-type factory, what steps must we take? Creating the factory is a given. But as a second step, we need to add some logic to the PostTypeFactoryPicker.

This means even if someone passed in a new factory to our dependency container, they would have a hard time adding that logic. They can't simply edit the pick method.

They could create a decorator for the PostTypeFactoryPicker, but that seems like a lot of trouble.

We can solve this by altering the design. By moving the determination logic out of the PostTypeFactoryPicker class and into the factories themselves, the problem disappears.

The possible solution

A few good names exist for such a method, from supports and canCreate to isValidFor.

As a first step, we can add to our existing interface:

interface PostTypeFactory
{
    public function isValidFor(MdFile $inputFile): bool;
 
    public function create(MdFile $inputFile): Post;
}

Then, the PostFactory can be modified as follows, and the picker class can be removed:

class PostFactory implements PostTypeFactory
{
    public function create(MdFile $inputFile): Post
    {
        // $this->factories holds the injected PostTypeFactory instances
        foreach ($this->factories as $factory) {
            if ($factory->isValidFor($inputFile)) {
                return $factory->create($inputFile);
            }
        }
 
        return new NullPost();
    }
 
    public function isValidFor(MdFile $inputFile): bool
    {
        // ???
    }
}

One more change

We have to add the isValidFor to the PostFactory, which doesn't make sense.

We could return true and call it a day, or even throw an exception saying it's not implemented.

Alternatively, we can solve this problem by creating more granular interfaces:

interface PostTypeFactoryValidityChecker {
    public function isValidFor(MdFile $inputFile): bool;
}
 
interface PostTypeFactory
{
    public function create(MdFile $inputFile): Post;
}

With this change, we only have to implement what we need:

class ArticleFactory implements PostTypeFactory, PostTypeFactoryValidityChecker
{
}
 
class PostFactory implements PostTypeFactory
{
}

As a conclusion

Now, we are in a position where all we have to do is pass in the factory after creating it. There's nothing else we have to change in our code.

This even makes certain people with specific ideas about a particular topic smile.

While the advantages may not be significant for a solo pet project, nurturing good habits and reflexes for situations where they will be necessary is not fruitless.

I have built a custom SSG for this site. While doing so, I have explored different (code) designs for it.

The plan with this series is to cover specific parts of the system, show some alternative options, and explain the rationale behind certain decisions.

While the needs are based on my requirements, there could be more general takeaways, even if building a general-purpose static site generator (SSG) and a bespoke one are different endeavors.

Some background

At one point, I distinguished between two types of posts on this site: articles and pulses. The distinction between the two has become blurred over time, but that's a different story.

For the articles, I use the Markdown file's name as the title, for example, Alpine.js directives and WordPress sanitization.md.

In addition to the content, I also include some meta information: date, tags, etc.

---
date: 2020-09-01
tags: wordpress, alpinejs
---
 
WordPress has functions with sensible defaults for when you want to filter untrusted HTML ...

The pulses do not contain any meta information. The file's name also includes the date, for example, 202203281713 Introducing the Pulse.md.

While I'm sure the reasons behind these decisions are intriguing for everyone, with great effort, I'll refrain from discussing them since they are not necessary to understand the rest of the article.

One quick note: not all "pages" are generated from Markdown files; some data is read from a JSON file.

A possible solution

Due to the two post types and their differences, some complexity arises because they must be handled differently.

Factory or Factories for the Post Types

To keep things separate, we can create distinct factories for the post types: PulseFactory, ArticleFactory.

Having one factory with multiple creation methods is an alternative option, but likely the factories will have different dependencies. For example, the PulseFactory does not need any kind of front-matter parsing for the meta.

This separation is "nice", but it's somewhat inconvenient to call the appropriate factories explicitly. It would be more convenient to pass a Markdown file to a factory and receive something in return.

That something could be a Post. So we can create a PostFactory. Having separate Article, Pulse entities might also make sense.

class PulseFactory
{
    public function create(MdFile $inputFile): Post
    {
    }
}
 
class ArticleFactory
{
    public function create(MdFile $inputFile): Post
    {
    }
}
 
class PostFactory
{
    public function create(MdFile $inputFile): Post
    {
    }
}

To "connect" all these factories, we can introduce a PostTypeFactory:

interface PostTypeFactory
{
    public function create(MdFile $inputFile): Post;
}

The picker class

How to determine which factory to call is important, but the crucial question is which class does the determining.

It's perfectly acceptable to have that logic inside the PostFactory:

class PostFactory implements PostTypeFactory
{
    public function create(MdFile $inputFile): Post
    {
        $concreteFactory = match ($inputFile) {
            // or if statements, private methods ...
        };
 
        return $concreteFactory->create($inputFile);
    }
}

Alternatively, we can introduce a "picker" class, PostTypeFactoryPicker, responsible for determining the correct factory based on the Markdown file. We could also call it a Resolver or a Determiner.

Bringing It All Together

In the end, we have something like this:

class PostTypeFactoryPicker
{
    public function pick(MdFile $inputFile): PostTypeFactory
    {
    }
}
 
readonly class PostFactory implements PostTypeFactory
{
    public function __construct(
        private PostTypeFactoryPicker $postTypeFactoryPicker
    ) {
    }
 
    public function create(MdFile $inputFile): Post
    {
        $concreteFactory = $this->postTypeFactoryPicker->pick($inputFile);
 
        return $concreteFactory->create($inputFile);
    }
}

Somewhere in the system, this will be executed, of course with the help of a dependency injection container:

$post = (new PostFactory(
    new PostTypeFactoryPicker()
))->create($file);

As a conclusion

There's nothing groundbreaking here, and this resembles some of the well-known patterns.

The fact that the creational requirements are not intermingled is a positive trait. Even that facade-like factory (PostFactory) that doesn't do much has its merits.

The picker class is questionable. But having the determination logic in one place is undoubtedly good.

There are other ways to do it, though. More about that in another article.

We all know how indexes work: the first item is 0, the second item is 1.

So really, there's no doubt about what this code means:

$blocks[0]['name'] === 'acme/image';

Also, no doubt about what this one means:

$blocks[1]['name'] === 'acme/image';

Let's suppose that's really the gist of the "code": checking if the "name" of the first or second "block" matches "acme/image".

I think having a conditional like this is okay:

if ($blocks[0]['name'] === 'acme/image') {
    return 'something';
}
 
if ($blocks[1]['name'] === 'acme/image') {
    return 'something else';
}

It just gets the job done.

Extracting to a variable or constant the "block type" would be a slight improvement:

const IMAGE_BLOCK = 'acme/image';
 
if ($blocks[0]['name'] === IMAGE_BLOCK) {
    return 'something';
}
 
if ($blocks[1]['name'] === IMAGE_BLOCK) {
    return 'something else';
}

Capturing and hiding away the "block structure" is an idea to explore:

function blockNameMatches(array $block, string $name): bool
{
    return $block['name'] === $name;
}
 
if (blockNameMatches($blocks[0], IMAGE_BLOCK)) {
    return 'something';
}
}

Building on top of these low-level functions could bring some clarity and specificity:

function isBlockAnImageBlock(array $block): bool
{
    return blockNameMatches($block, IMAGE_BLOCK);
}
 
if (isBlockAnImageBlock($blocks[0])) {
    return 'something';
}

However weird it might seem at first sight, using constants in place of those indexes makes things more readable to me:

const FIRST = 0;
const SECOND = 1;
 
if (isBlockAnImageBlock($blocks[FIRST])) {
    return 'something';
}
 
if (isBlockAnImageBlock($blocks[SECOND])) {
    return 'something else';
}

But somehow, for big numbers, like FIVE_HUNDRED_AND_TWENTY_TWO, it no longer works.

And, of course, there's no limit; we can go even further:

function isFirstBlockAnImageBlock(array $blocks): bool
{
    return isBlockAnImageBlock($blocks[FIRST]);
}
 
function isSecondBlockAnImageBlock(array $blocks): bool
{
    return isBlockAnImageBlock($blocks[SECOND]);
}
 
if (isFirstBlockAnImageBlock($blocks)) {
    return 'something';
}
 
if (isSecondBlockAnImageBlock($blocks)) {
    return 'something else';
}

Is it better now or worse? When was it good enough?

I like that there are no straightforward answers to these questions.

WordPress.com and WordPress VIP hosting offer an image transformation API.

Although their exact implementation remains unknown to the public, they use a customized version of Photon, which is open-source.

If you favor a prescriptive syntax over a descriptive syntax for your responsive images, then Photon or a comparable solution is indispensable or, at the very least, extremely useful.

At present, the selected hosting for "my" client, WordPress VIP, does not provide a local development environment with image transformation capabilities.

VIP File System, Cron control, and Page cache services are not built-in. When developing features for an application that relies on these services, it is strongly recommended to stage changes and perform tests on a non-production VIP Platform environment.

Not being able to test services locally but only on the hosting environment is not the most efficient approach for rapid delivery.

Consequently, I decided to try to set up Photon locally on my own.

Setting up Photon locally

While most code from WordPress.org is open-source, the infrastructure and configuration details aren't publicly available. We can only make educated guesses about the PHP extensions, libraries, and so on, installed on their server.

Dockerized Photon: the starting point

Chris Zarate shared a Dockerized Photon, which served as an excellent starting point.

Lacking any official documentation regarding Photon's requirements, this provided the technology stack needed to run it. At least, the stack required six years ago, the date of the last commit in the repository.

I didn't expect it to work out of the box, and indeed, it didn't. Some of the libraries were no longer accessible. However, since most of the requirements are similar to a more involved LAMP stack, it was easy to find newer versions or alternatives to the requirements.

Photon's configuration

Photon's main entry file expects a config.php file, which is not made available, presumably for good reasons.

This forced me to scan the code and try to decipher some of the logic, as things weren't functioning even after setting up the infrastructure.

Ultimately, even though I'd rather not know anything about Photon's internals, this turned out to be advantageous. It led me to find the override_raw_data_fetch filter, which allowed me to control the image loading process.
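The filter name comes from Photon's source, but everything else in the following sketch is an assumption: I'm assuming Photon bundles WordPress-style add_filter()/apply_filters() helpers and passes the requested image URL as the second argument, so verify the actual apply_filters() call in Photon's entry file before relying on this.

```php
// config.php (sketch, unverified): serve image bytes from the mounted
// uploads volume instead of letting Photon fetch them over HTTP.
// The callback arguments are assumptions based on reading the source.
add_filter('override_raw_data_fetch', function ($data, $url) {
    $path = '/var/www/html/uploads/'
        . ltrim((string) parse_url($url, PHP_URL_PATH), '/');

    if (is_readable($path)) {
        return file_get_contents($path);
    }

    return $data; // fall through to Photon's default fetch
}, 10, 2);
```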

Photon up and running

Check this video, where I get the Photon service up and running and where I explain some extra things.

There are things to improve, but after weighing the trade-offs between development time, complexity, and the specific needs of my project, I decided to stop at this point.

I might return to Photon and implement some image caching and other good things.

Serving the WordPress images: different approaches

Depending on your WordPress installation, you might opt to modify the attachment URLs with a function like wp_get_attachment_image_src, use a rewrite rule to redirect all images to the container, or set up a proxy.

All are valid solutions in different circumstances.
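For the first approach, a minimal sketch using WordPress's wp_get_attachment_url filter; the example.ddev.site hostname and the 8079 port are assumptions matching the HTTPS port exposed for the photon service in the Docker Compose file, so adjust them to your project.

```php
// Sketch: rewrite attachment URLs so images are served by the local
// Photon container. The https://example.ddev.site:8079 origin is an
// assumption; adjust it to your DDEV project.
add_filter('wp_get_attachment_url', function (string $url): string {
    $needle = '/wp-content/uploads/';
    $pos = strpos($url, $needle);

    if ($pos === false) {
        return $url; // not an uploads URL, leave it untouched
    }

    return 'https://example.ddev.site:8079/'
        . substr($url, $pos + strlen($needle));
});
```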

Integrating Photon with DDEV

As the project I'm working on uses DDEV, it made sense to add Photon as an additional service rather than run it as a standalone Docker container outside DDEV.

Creating an Additional Docker Compose File

This can be achieved by creating an additional Docker Compose file.

While the documentation on setting up additional services is brief, they do have a contribution repository with plenty of examples.

version: '3.6'
 
services:
photon:
container_name: ddev-${DDEV_SITENAME}-photon
hostname: ${DDEV_PROJECT}-photon
build: ./photon/
expose:
- '80'
labels:
com.ddev.site-name: ${DDEV_SITENAME}
com.ddev.approot: $DDEV_APPROOT
environment:
- VIRTUAL_HOST=$DDEV_HOSTNAME
- HTTP_EXPOSE=8078:80
- HTTPS_EXPOSE=8079:80
- SERVER_NAME=ddev-${DDEV_PROJECT}-photon
volumes:
- ../public/wp-content/uploads:/var/www/html/uploads

Nginx Configuration: Proxy or Redirect

It also required modifying the Nginx config file. I opted for a proxy.

location ~ /wp-content/uploads/.*\.(jpe?g|png|webp|gif)$ {
rewrite /wp-content/uploads/(.*)$ /$1 break;
proxy_pass http://photon;
}

But I had it working as a redirect before that:

location ~* (.*/wp-content/uploads/)(.*\.(?:jpe?g|png|webp|gif))$ {
return 307 https://$host:8079/$2$is_args$args;
}

The 307 status is "Temporary Redirect"; not that it would matter locally.
