Config placement using the WP-CLI anchor https://implenton.com/config-placement-using-the-wp-cli-anchor/ Wed, 03 Dec 2025 20:39:37 +0000 https://implenton.com/?p=530 Why does the wp config set WP-CLI command add a constant or variable where it does?

wp config set DB_ENGINE sqlite

This command will add the constant to the wp-config.php file right before the “That’s all, stop editing!” line, exactly where WordPress suggests people add their “custom values”:

/* Add any custom values between this line and the "stop editing" line. */

define( 'DB_ENGINE', 'sqlite' );
/* That's all, stop editing! Happy publishing. */

“Behind the scenes”, the wp-cli/config-command leverages the WP Config Transformer to “programmatically edit a wp-config.php file”. By default, it treats that comment as its “placement anchor string”.

Control where the config is added

However, if you want to, you can control where the constant is added.

For example, you might think the DB_ENGINE constant would be better placed with the other database settings.

Using the anchor option, you can do exactly that:

wp config set DB_ENGINE sqlite --anchor="/** The name of the database for WordPress */"

This adds the constant right before the specified string:

// ** Database settings - You can get this info from your web host ** //
define( 'DB_ENGINE', 'sqlite' );
/** The name of the database for WordPress */
define( 'DB_NAME', 'your_db_name' );

With the separator option, you can also add some breathing room:

wp config set DB_ENGINE sqlite --anchor="/** The name of the database for WordPress */" --separator="\n\n"

Like so:

// ** Database settings - You can get this info from your web host ** //
define( 'DB_ENGINE', 'sqlite' );

/** The name of the database for WordPress */
define( 'DB_NAME', 'your_db_name' );

For even more flexibility, you can use the placement option to control whether new values are placed before or after the anchor.
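A sketch of combining the two, assuming the default placement is before the anchor:

```shell
# Place DB_ENGINE after the anchor line instead of before it:
wp config set DB_ENGINE sqlite \
    --anchor="/** The name of the database for WordPress */" \
    --placement="after"
```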


While the WP Config Transformer can insert constants and variables (--type=variable) before or after a given string, it can’t insert arbitrary PHP comments or code (although that would be useful in many cases).
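For reference, setting a variable instead of a constant looks like this (the value here is illustrative):

```shell
# --type=variable writes $table_prefix = 'custom_'; instead of a define()
wp config set table_prefix custom_ --type=variable
```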

Its main purpose isn’t to be a “general text editor”. Most of its code focuses on normalization, validation, ensuring there are no duplicate constants, and so on.

]]>
REST API driven Block Editor UI or how to remove the Author selector https://implenton.com/rest-api-driven-block-editor-ui-or-how-to-remove-the-author-selector/ Fri, 01 Nov 2024 16:50:28 +0000 https://implenton.com/2024/11/01/rest-api-driven-block-editor-ui-or-how-to-remove-the-author-selector/ How would you go about removing certain UI elements from the Block Editor? Suppose you want to remove the default Author selector to replace it with a component that supports multiple authors.

In the pre-Gutenberg era, or if you are still using the Classic Editor (or ClassicPress), you could remove the author selector by removing the meta box containing it:

remove_meta_box( 'authordiv', 'post', 'normal' );

After removing the default, you can register a new meta box with add_meta_box.

This approach seems straightforward: you can add, remove, or alter UI elements with specific functions and filters without affecting other parts of the system.

How do you do the same in the Gutenberg era?

Gutenberg has something similar to “meta boxes”, called Panels. Pretty much the entire Settings Sidebar consists of these.

But this information doesn’t help much, as the PostAuthorPanel is included in the PostSummary in such a way that there’s no way to remove it with any functions or filters.

(It’s slightly misleading that the PostAuthorPanel is not really a “panel”, or at least doesn’t resemble one. Panels are defined as components that “[…] expand and collapse multiple sections of content”.)

Besides the PostAuthorPanel component there’s the PostAuthor. This “renders the component for selecting the post author” and is wrapped in the PostAuthorCheck.

Imagine the components nested within each other as shown here:

<PostSummary>
    <PostAuthorPanel>
        <PostAuthorCheck>
            <PostAuthor />
        </PostAuthorCheck>
    </PostAuthorPanel>
</PostSummary>

This PostAuthorCheck “renders its children only if the post type supports the author”.

This suggests that we may need to remove the author support:

remove_post_type_support( 'post', 'author' );

However, this is quite drastic. Removing the support has various effects, from removing the author column from the post listing to disabling aspects in the REST API.

Not ideal.

How does the Block Editor determine whether a post type supports authors? That is the key question.

It has something to do with the post support, but not in the sense that the Block Editor directly checks whether the post supports the author.

It does it in a more indirect way.

It checks the existence of wp:action-assign-author in the REST API’s _links.

More or less, it does it like so:

const hasAssignAuthorAction =
    !!wp.data.select('core/editor').getCurrentPost()._links?.[
        'wp:action-assign-author'
    ];

This is a very different way of thinking. We have parts of the Block Editor UI that are driven by the REST API.

The solution to remove the author “panel” is to keep the post support (avoiding side effects) but modify the REST API response to remove the https://api.w.org/action-assign-author link.

This is what, for example, the Co-Authors Plus plugin does as well.
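A minimal sketch of that approach (the filter and the link rel are the real ones; where exactly you hook it is up to you):

```php
// Strip the assign-author action link from the REST response for posts.
// Post type support for 'author' stays intact, so listings, queries,
// etc. keep working -- only the Block Editor UI check comes up empty.
add_filter( 'rest_prepare_post', function ( $response ) {
    $response->remove_link( 'https://api.w.org/action-assign-author' );

    return $response;
} );
```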


This is nothing new. It’s been like this since 2018. And if anyone wants to see what spawned this need and which options were initially discussed, they can refer to issue #6361.

]]>
WordPress hooks, method visibility and converting a callable into closure https://implenton.com/wordpress-hooks-method-visibility-and-converting-a-callable-into-closure/ Tue, 30 Jul 2024 16:50:28 +0000 https://implenton.com/2024/07/30/wordpress-hooks-method-visibility-and-converting-a-callable-into-closure/ The documentation page of Actions details what actions are and how to use them.

Probably, for simplicity’s sake, they only give examples with named functions:

function wpdocs_save_posts() {
    // do something
}

add_action( 'init', 'wpdocs_save_posts' );

Those who may wonder how actions work in a class can find an example on the add_action function reference page, under the user-contributed notes.

Here’s the contributed example with the note:

To use add_action() when your plugin or theme is built using classes, you need to use the array callable syntax. You would pass the function to add_action() as an array, with $this as the first element, then the name of the class method…

class WP_Docs_Class {
    public function __construct() {
        add_action( 'save_post', array( $this, 'wpdocs_save_posts' ) );
    }

    public function wpdocs_save_posts() {
        // do stuff here...
    }
}

$wpdocsclass = new WP_Docs_Class();

Hooks in the constructor

Further down the page, another contributor, Bartek, rightfully points out that this upvoted example promotes a bad practice:

I urge you, don’t attach your hook callbacks inside class’ constructor. Instead of implementing official example most upvoted in this thread, opt for decoupled solution. You have one more line of code to write, but objects become more reusable and less error-prone (consider, what would happen if you call new WP_Docs_Class() twice in your code, following Codex example).

class WP_Docs_Class {
    public function hooks() {
        add_action( 'save_post', array( $this, 'wpdocs_save_posts' ) );
    }

    public function wpdocs_save_posts() {
        // do stuff here...
    }
}

$wpdocsclass = new WP_Docs_Class();
$wpdocsclass->hooks();

To answer the rhetorical question: the wpdocs_save_posts function would be called twice. Depending on what we do in wpdocs_save_posts, it might be more or less problematic.

The visibility of the callback method

There’s one more important detail to unpack about how hooks are used in classes, and that’s the visibility of the callback methods.

We see in the example that wpdocs_save_posts is set to public. If that’s changed to private or protected, a fatal error is thrown:

Fatal error: Uncaught TypeError: call_user_func_array(): Argument #1 ($callback) must be a valid callback, cannot access private method

Typically, we give a lot of importance to the visibility of our methods.

If the method is public, that means it’s callable anytime, anywhere, by anybody.

$wpdocsclass = new WP_Docs_Class();
$wpdocsclass->wpdocs_save_posts();

There’s a chance that’s not what we want with WordPress actions.

We might want the method to be called only by the action, but we have to keep it public because otherwise it would not work.

There’s actually a way to solve this.

Converting a callable into a closure

We can wrap the callback in the Closure class’ static fromCallable method.

With this, we can change the wpdocs_save_posts to private (or protected):

class WP_Docs_Class {
    public function hooks() {
        add_action( 'save_post', \Closure::fromCallable( array( $this, 'wpdocs_save_posts' ) ) );
    }

    private function wpdocs_save_posts() {
        // do stuff here...
    }
}

Or we can use the first-class callable syntax, available since PHP 8.1:

class WP_Docs_Class {
    public function hooks() {
        add_action( 'save_post', $this->wpdocs_save_posts( ... ) );
    }

    private function wpdocs_save_posts() {
        // do stuff here...
    }
}

While we reduced the chance of others calling methods they shouldn’t, a downside appears when we want to test these.

Likely these “action” methods are the “primary” methods on the class, and if they are private they are harder but not impossible to test.
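One way to do it, sketched with PHP’s reflection API (the test framework glue is omitted):

```php
// Invoke the private hook callback directly in a test via reflection.
$instance = new WP_Docs_Class();

$method = new \ReflectionMethod( $instance, 'wpdocs_save_posts' );
$method->setAccessible( true ); // a no-op since PHP 8.1, required before

$method->invoke( $instance );
```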

]]>
Quality Insights Toolkit, a testing platform from WooCommerce https://implenton.com/quality-insights-toolkit-a-testing-platform-from-woocommerce/ Mon, 29 Jul 2024 16:50:28 +0000 https://implenton.com/2024/07/29/quality-insights-toolkit-a-testing-platform-from-woocommerce/ It’s great to see how automation is utilized to ensure code quality in the WordPress space.

For example, over the years, the WordPress Plugin Review team has shared insights into how they use automation in their review process. There are plenty of posts on Make WordPress.org, such as Reducing the Plugin Review Team’s Workload through Automation and Automatically Catching Bugs in Plugins, and others.

To make things easy, most of the rules and checks have been bundled in the Plugin Check (PCP) plugin. You don’t need any setup to verify your work.

There have also been other initiatives, like Tide, which aimed to “raise the quality of code one plugin or theme at a time by elevating the importance of code quality in the developer consciousness.”

Most of these primarily run the WordPress Coding Standards and PHPCompatibilityWP.

Quality Insights Toolkit (QIT)

Outside the Core, there are some interesting initiatives popping up.

There’s the Quality Insights Toolkit (QIT) from WooCommerce, which provides current and prospective extension developers with managed automated tests. Currently, this is limited to the Woo Marketplace, but it will likely become available to the public.

This toolkit goes beyond the mentioned tools, PHPCS and PHPCompatibility, and introduces tools and checks that are less known and used in the WordPress space.

It offers E2E and API testing with Playwright, which is extensively used in Gutenberg but is just gaining steam in the core.

Additionally, QIT uses Semgrep and PHPStan for static analysis.

It’s worth checking out this initiative and integrating some of the tools they are using in our workflow. There’s a good chance they will gradually make their way into the Core over time.

I’m interested in finding out how they use Semgrep and the benefits we can get from it, as that’s something I’m currently not using for my work.

]]>
GitHub Action inputs and array values https://implenton.com/github-action-inputs-and-array-values/ Wed, 06 Dec 2023 16:50:28 +0000 https://implenton.com/2023/12/06/github-action-inputs-and-array-values/ In GitHub Actions there are many places where we can use arrays.

For example, when using the matrix strategy:

jobs:
    example_matrix:
        strategy:
            matrix:
                version: [10, 12, 14]

Or when we want to filter what triggers the workflow:

on:
    push:
        branches:
            - main
            - 'releases/**'

But we are limited to using only strings, numbers, and booleans for input values.

The following are not valid options according to the spec:

- uses: 'example-action@v1'
    with:
        keys: ['foo', 'bar']
- uses: 'example-action@v1'
    with:
        keys:
            - foo
            - bar

We can use multiline strings if we want something close to multiple values.

- uses: 'example-action@v1'
    with:
        keys: |-
            foo
            bar

But the value of keys is still a string; it only contains new lines.

We have to turn it into a proper array.

If we use JavaScript actions, it will look something like this:

import core from "@actions/core";

const keys = core.getInput("keys")
    .split(/[\r\n]/)
    .map(input => input.trim())
    .filter(input => input !== '');
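The splitting logic can be pulled into a plain helper (the name is mine) so it can be unit-tested outside of a workflow run:

```javascript
// Hypothetical helper mirroring the snippet above: splits a multiline
// input into trimmed, non-empty lines.
function parseMultilineInput(raw) {
    return raw
        .split(/[\r\n]/)
        .map((input) => input.trim())
        .filter((input) => input !== "");
}
```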

It’s an important but small detail that we used |- in the YAML and not |.

We can control how we treat the final line break in multiline strings.

Chomping controls how final line breaks and trailing empty lines are interpreted.

For comparison, here’s the YAML converted into JSON:

strip: |-
    foo
clip: |
    bar

{
    "strip": "foo",
    "clip": "bar\n"
}
]]>
Install a WordPress plugin from a self-hosted ZIP file using Composer https://implenton.com/install-a-wordpress-plugin-from-a-self-hosted-zip-file-using-composer/ Tue, 31 Oct 2023 16:50:28 +0000 https://implenton.com/2023/10/31/install-a-wordpress-plugin-from-a-self-hosted-zip-file-using-composer/ With tools like WP Starter or Bedrock, we can manage WordPress sites with Composer easily.

Thanks to WordPress Packagist, when we want to install a plugin from the WordPress.org Plugin Directory, it’s just a matter of requiring them:

composer require wpackagist-plugin/akismet

However, things are more complicated when we are dealing with commercial plugins.

Those are not listed on WordPress Packagist, and most lack Composer support. Some do, for example, Yoast SEO.

We can get around this using Private Packagist. That’s the most convenient way. Although it’s a paid service, its convenience makes it a worthy investment. Moreover, it’s a solid way to support this crucial project within the PHP ecosystem.

But there are other ways.

Typically, when we purchase a commercial plugin, we can download it as a ZIP file, but we rarely get access to a private Git repository where the source code is.

We can take advantage of this and self-host the plugin’s ZIP files. We can simply upload them to our server.

Once we do that, we can make Composer aware of it by registering a repository of type package. Here’s how it looks:

{
    "require": {
        "example/foo": "1.2.1"
    },
    "repositories": [
        {
            "type": "package",
            "package": {
                "name": "example/foo",
                "version": "1.2.1",
                "type": "wordpress-plugin",
                "dist": {
                    "url": "https://example.org/restricted/foo-1.2.1.zip",
                    "type": "zip"
                }
            }
        }
    ]
}

With this configuration, Composer treats it as any other package, and we can install it as example/foo.

Likely, we don’t want to keep these files accessible to the public. We can protect them with Basic Authentication.

Composer knows how to deal with it out of the box.

To provide the credentials, we can create an auth.json:

{
    "http-basic": {
        "example.org": {
            "username": "client",
            "password": "kN2gyd7bDjOAYXUB6xv9HC"
        }
    }
}
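Composer can also write this file for us; a sketch using the example credentials from above:

```shell
# Creates/updates auth.json next to composer.json;
# pass --global to store the credentials in the home directory instead.
composer config http-basic.example.org client kN2gyd7bDjOAYXUB6xv9HC
```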
]]>
Provide a Laravel-like facade for your application https://implenton.com/provide-a-laravel-like-facade-for-your-application/ Fri, 08 Sep 2023 16:50:28 +0000 https://implenton.com/2023/09/08/provide-a-laravel-like-facade-for-your-application/ Facades in Laravel have a specific meaning and should not be confused with the facade pattern. According to the Laravel documentation:

Facades provide a “static” interface to classes that are available in the application’s service container.

If this is confusing, it becomes clear when you understand the problem facades solve and how they do it.

The problem facades solve

Let’s consider a scenario where we have an SVG Loader interface and its implementation with multiple dependencies.

namespace Svg;

interface Loader {
    public function byName(string $name): string;
}

class LocalLoader implements Loader {
    // ...
}

$svgLoader = new Svg\LocalLoader(
    __DIR__ . '/assets/svg',
    new Svg\Normalizer\Bundle(
        new Svg\Normalizer\WhitespaceNormalizer(),
        new Svg\Normalizer\SizeAttributeNormalizer(),
        // ...
    )
);

$svgLoader->byName('mastodon');

We don’t instantiate this and other objects repeatedly, but we have a PSR-11 container that takes care of it.

We could use a container that supports auto-wiring or configure it. PSR-11 doesn’t define how you add things to the container.

Wherever we want to use the SVG Loader, we inject it as a dependency:

readonly class HtmlComponent {
    public function __construct(
        private Svg\Loader $svgLoader,
        private Template\Renderer $templateRenderer
    ) {
    }

    // ...
}

This approach is sound and generally praised because making dependencies explicit is extremely important.

There are other considerations than making dependencies crystal clear, and sometimes, for reasons, a more concise syntax is preferred:

Facade\Svg::byName('mastodon');

That’s what Laravel-like facades “solve”. They provide a way to do this. From the docs again:

Laravel facades serve as “static proxies” to underlying classes in the service container, providing the benefit of a terse, expressive syntax …

An MVP implementation

We don’t need Laravel to have facades; a basic implementation is surprisingly simple.

If we wanted to provide a facade for exactly one class and one method, we could do this:

namespace Facade;

class Svg {
    public static function byName(string $name): string {
        // container() -> \Psr\Container\ContainerInterface
        $svgLoader = container()->get(
            // We used the FQN as the ID for the container
            Svg\Loader::class
        );

        return $svgLoader->byName($name);
    }
}

We need a facade class with the same method name as the proxied one, but this time static.

(If we have multiple methods, then it’s quite clear we will end up with quite a lot of duplication and maintenance overhead.)

The service locator pattern

When we are calling container(), we are accessing the container that implements ContainerInterface.

It doesn’t have to be a function; it can be done in many ways, arguably some worse than others:

$instance = container()->get($id);
// or
$instance = Container::services()->get($id);
// or
global $container;

$instance = $container->get($id);
// or other way

This is the part that is controversial and why some dislike facades.

Ultimately, we are providing another “syntax” to grab objects from the application container from anywhere, anytime.

This is the service locator pattern hidden in plain sight.

The Laravel documentation provides some warnings and thoughtful advice:

However, some care must be taken when using facades. The primary danger of facades is class “scope creep”. Since facades are so easy to use and do not require injection, it can be easy to let your classes continue to grow and use many facades in a single class.

A more flexible approach

Typically, we will want facades for multiple classes with various methods.

Here’s a possible implementation of a more flexible solution:

namespace Facade;

abstract class Facade
{
    abstract protected static function proxiedId(): string;

    public static function __callStatic(string $name, array $arguments)
    {
        $instance = container()->get(
            static::proxiedId()
        );

        return $instance->$name(...$arguments);
    }
}

/**
 * @method static string byName(string $name)
 * 
 * @see Svg\Loader
 */
class Svg extends Facade
{
    protected static function proxiedId(): string
    {
        // We used the FQN as the ID for the container
        return Svg\Loader::class;
    }
}

Compared to what we had, we replaced the “duplicated” method(s) with the magic method.

The consequence of this is losing the autocomplete in the IDEs. To overcome this, we added clues using DocBlocks.

Some duplication is necessary, but all things considered, DocBlocks cause fewer headaches. And they’re not even required; they’re a convenience.

With the introduction of the Facade\Facade base class, we simplified things even more while allowing us to provide helper methods for all facades.

Last words

Of course, Laravel’s implementation is far more complex; it provides performance optimizations, has error checking, etc., which you should do in an actual project. But at the end of the day, this is the core of it.

Testability

Interestingly, by far, most of the code in Laravel’s Facade class is about providing a way to test them.

All facades have methods like expects, shouldReceive, spy, which might sound familiar because they are from Mockery.

Whenever you call a method like expects(), that is “proxied” to Mockery, which allows you to set up expectations “as usual”.

namespace Illuminate\Support\Facades;

class Svg extends Facade {
    // ...
}

Svg::shouldReceive('byName')
   ->with('mastodon')
   ->andReturn('<svg>...</svg>');

Those who argue that facades make the code hard to test or even untestable in Laravel might not know this.


How to replicate this behavior or how to test the MVP implementation is for another article, but it’s definitely possible.

]]>
Declarative and component-based UI with Signals core https://implenton.com/declarative-and-component-based-ui-with-signals-core/ Fri, 04 Aug 2023 16:50:28 +0000 https://implenton.com/2023/08/04/declarative-and-component-based-ui-with-signals-core/ Frameworks like Alpine and Stimulus continue to enjoy a widespread appeal. I’m a big fan too.

Their declarative and component-based approach to UI, which is also characteristic of React, Vue.js, Svelte, etc., is enjoyable and offers easy-to-understand patterns.

<div x-data="{ count: 0 }">
    <button x-on:click="count++">Increment</button>

    <span x-text="count"></span>
</div>

However, there are instances where the HTML cannot be “decorated” with attributes of this kind. Or perhaps you dislike “polluting” the DOM or prefer less magic and to stay closer to barebones JavaScript.

If that’s the case, but you still want state-driven reactive UIs, using Signals at the core of your components might be the solution.

Signals

@preact/signals-core is a good choice for state management and to be the driving force behind the UI and DOM updates. Using the signal and effect functions provides you with the low-level API necessary for creating reactive components.

import { signal, effect } from '@preact/signals-core';

const name = signal('Jane');

// Logs name every time it changes:
effect(() => console.log(name.value));
// Logs: "Jane"

// Updating `name` triggers the effect again:
name.value = 'John';
// Logs: "John"

This simplicity is deceptively powerful.

If you are familiar with any of the previously mentioned frameworks, the following examples will feel familiar.

Compared to Alpine

The chosen components are from the Alpine getting started page, where they build the same UI elements “their way”. This way, you can compare the two easily.

Without further ado, here’s the counter:

<div class="counter">
    <button type="button">Increment</button>
    <span></span>
</div>

function counter(rootElement) {
    const count = signal(0);

    const displayCount = () => {
        rootElement.querySelector('span').innerHTML = count.value;
    };

    const handleIncrementCount = () => {
        count.value = count.value + 1;
    };

    const init = () => {
        effect(displayCount);

        rootElement
            .querySelector('button')
            .addEventListener('click', handleIncrementCount);
    };

    return {
        init,
    };
}

counter(document.querySelector('.counter')).init();

Because we used a signal in displayCount, all we had to do was “wrap” displayCount in an effect.

As the documentation says:

To run arbitrary code in response to signal changes, we can use effect(fn). […], effects track which signals are accessed and re-run their callback when those signals change.

You are not alone if this reminds you of some sort of proxy state. It’s a bit like that, but technically very different.
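To get a feel for the tracking mechanism, here’s a toy sketch of the idea (emphatically not how @preact/signals-core is actually implemented):

```javascript
// Toy signal/effect: an effect registers itself as "active", and any
// signal read during its run subscribes that effect to future updates.
let activeEffect = null;

function signal(initial) {
    let value = initial;
    const subscribers = new Set();

    return {
        get value() {
            // Reading inside an effect subscribes that effect.
            if (activeEffect) subscribers.add(activeEffect);
            return value;
        },
        set value(next) {
            value = next;
            // Re-run every effect that read this signal.
            subscribers.forEach((fn) => fn());
        },
    };
}

function effect(fn) {
    activeEffect = fn;
    fn(); // the first run collects the subscriptions
    activeEffect = null;
}
```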

You can also play with it on CodePen.

This would be the dropdown:

<div class="dropdown">
    <button type="button">Toggle</button>
    <div>Contents...</div>
</div>

function dropdown(rootElement) {
    const open = signal(false);

    const displayContent = () => {
        rootElement.querySelector('div').style.display = open.value
            ? ''
            : 'none';
    };

    const handleClickOutside = (event) => {
        if (rootElement.contains(event.target)) {
            return;
        }

        open.value = false;
    };

    const handleToggleOpen = () => {
        open.value = !open.value;
    };

    const init = () => {
        effect(displayContent);

        rootElement
            .querySelector('button')
            .addEventListener('click', handleToggleOpen);

        document.addEventListener('click', handleClickOutside);
    };

    return {
        init,
    };
}

Last but not least, the search input:

<div class="search">
    <input type="search" placeholder="Search...">
    <ul>
    </ul>
</div>

// `computed` also comes from '@preact/signals-core'
function search(rootElement) {
    const items = ['foo', 'bar', 'baz'];
    const search = signal('');
    const matchedItems = computed(() =>
        items.filter((item) => item.startsWith(search.value)),
    );

    const displayResults = () => {
        rootElement.querySelector('ul').innerHTML = matchedItems.value
            .map((item) => `<li>${item}</li>`)
            .join('');
    };

    const handleQueryChange = (event) => {
        search.value = event.target.value;
    };

    const init = () => {
        effect(displayResults);

        rootElement
            .querySelector('input')
            .addEventListener('keyup', handleQueryChange);
    };

    return {
        init,
    };
}

Of course, these are all naive implementations that don’t handle cases where things could go wrong, but they should demonstrate how things could be structured and glued together.


For more complex situations, you can consider @deepsignal which extends Signals. It allows the state to be written in the following way:

import { deepSignal } from '@deepsignal/preact';

const userStore = deepSignal({
    name: {
        first: 'Thor',
        last: 'Odinson',
    },
    email: 'thor@avengers.org',
});

The WordPress Interactivity API builds on both Signals and DeepSignal (and Preact).

It’s the first time WordPress has tried to offer some standardization for the JavaScript used on the front end.

]]>
Plugin life support with E2E tests https://implenton.com/plugin-life-support-with-e2e-tests/ Wed, 05 Jul 2023 16:50:28 +0000 https://implenton.com/2023/07/05/plugin-life-support-with-e2e-tests/ Those who have plugins in the WordPress Plugin Directory and rely on manual testing face a challenge in avoiding the plugins being marked as “out of date”.

When this occurs, a notice is displayed:

This plugin hasn’t been tested with the latest 3 major releases of WordPress. It may no longer be maintained or supported and may have compatibility issues when used with more recent versions of WordPress.

This is often interpreted as a sign that the plugin has been “abandoned.”

A lot has been said about “abandoned projects” in the open-source world, including the WordPress space. Various ideas have been proposed to address this issue, from promoting plugin adoption to creating maintenance programs.

Are we in a better situation than we were ten years ago?

One thing is for sure: there’s no way around testing. The only way to know something is still working as expected, as advertised, is by somehow checking it.

Manual testing over time is draining, unsustainable, and definitely not scalable. I would go as far as to say it’s the reason for “abandonment” in some cases.

The best chance: E2E tests

Among the many types of automated tests, end-to-end (E2E) tests are the easiest to deploy. They are the least invasive and do not require any upfront code changes.

Since most of the plugins were released, E2E testing has become easier and more accessible. And we are close to some reliable no-code, “AI” assisted E2E tools and solutions.

But until then, the most future-proof solution is to use Playwright. This is the tool that WordPress core plans to migrate to, and it is already being used by Gutenberg.

Maintaining the tests for others

While updating a small plugin, I added some E2E tests. However, they were not merged because the maintainer did not want to handle the complexity of the tool.

This decision was completely fair, considering the size and usage of the plugin, among other factors.

Nevertheless, the E2E tests themselves are not the problem; it’s the maintainability of the tool. That’s why I decided not to discard the tests and took on the responsibility of maintaining the Playwright setup myself.

My plan is to run the tests regularly and notify the developer if anything breaks. Likewise, maybe give them a ping when everything is smooth. Fingers crossed.

Who knows, I might add tests for other plugins over time. There are a few plugins that I would like to see around in the next few years.

If nothing else, when the time eventually comes, these tests can provide a solid foundation for forking the project, refactoring it, and adding new features.

]]>
Long live Comment Saver https://implenton.com/long-live-comment-saver/ Fri, 30 Jun 2023 16:50:28 +0000 https://implenton.com/2023/06/30/long-live-comment-saver/ There’s a plugin called Comment Saver, developed by Will Norris and released in 2008. Until a few weeks ago, it “officially” only supported WordPress 2.8.

I stumbled upon it accidentally while exploring someone’s WordPress installation, and to my surprise, the plugin still functioned, more or less.

By “more or less,” I mean it worked when the debug mode was turned off. Enabling debug mode caused it to throw some warnings and deprecation notices, breaking comment submission because …

Never mind, because it’s no longer the case, as the issue has been fixed. Besides this, the dependency on jQuery was dropped. Now it can be used without adding extra fluff.

The plugin is ready to last another 15 years. Or not.

While I made these small changes, Will put in just as much effort. He dug up the repo and made it available on GitHub, reviewed the submitted code, and added some automation for the release. All this even though he is no longer connected to the WordPress space.

The plugin is tiny – maybe 200 lines of code at most. The E2E test I wrote for it, along with its configuration code, ended up being more than double, or even triple, the size of the plugin itself.

It’s understandable that Will didn’t want to take on that added complexity as a burden, so that part never got merged.

This got me thinking, and it gave me an idea.

]]>