
Re: Build refactor and dockerisation of Mediagoblin

From: Olivier Mehani
Subject: Re: Build refactor and dockerisation of Mediagoblin
Date: Tue, 24 May 2022 23:48:27 +1000

Hi again!

Some progress on this.

On Sun 02 Jan 2022 at 12:30:28 +1100, Olivier Mehani wrote:
I have been working on trying to refactor the build system and better dockerise the Mediagoblin source for a few months, and have something that is _almost_ working [0].

tl;dr: I have a container image that works, and I'm polishing the branch [0] in the hope of merging soon. Try this:

    docker run -it --rm -p 6543:6543 -v ${PWD}/data:/srv:z omehani/mediagoblin

# For users

What I want users to be able to do is simply grab a template `docker-compose.yml` and run `docker-compose up` to get a ready-to-use, non-lazy MG instance.

By default, it would store config, DB, and media in a filesystem volume mounted from the host into the container. Importantly, the container would create all of those on first run if they are missing. Another important feature is that the container should migrate the DB whenever needed (e.g., like the Paperless-NG containers [1]), so an MG upgrade would simply be `docker-compose pull; docker-compose down; docker-compose up`.
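A minimal template along these lines might look as follows. This is a sketch, not the actual file from the branch: the image name and port come from the `docker run` commands in this mail, while the service name and volume path are assumptions.

```yaml
# docker-compose.yml -- hypothetical minimal single-service sketch
services:
  mediagoblin:
    image: omehani/mediagoblin
    ports:
      - "6543:6543"
    volumes:
      # Config, DB, and media all live under ./data on the host;
      # the container creates them on first run if missing.
      - ./data:/srv:z
```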

That's pretty much done.

I pushed the image to Docker Hub [1]. You can use it in a pinch with something like

    docker run -it --rm -p 6543:6543 -v ${PWD}/data:/srv:z omehani/mediagoblin

It will run a lazyserver in a single container, create a stub config in the local `data` directory, and put all media files there as well. If you kill and restart the container, all persistence comes from this `data` directory.

I'm building the MG image itself to start the lazyserver by default, but the docker-compose should ultimately start two containers from the same image, one for paste and one for celery. It would also start a RabbitMQ container from the upstream images.

At this stage, the single-container docker image seems pretty good, but the docker-compose bit is still not done. That single-container image is probably something we'd want to push to Docker hub every release.

This now works.

If you want to run a full stack with separate celery and RabbitMQ containers, you can use the docker-compose.yml file [2].

    docker compose up

You might have to alter the docker-compose.yml file to mount the `data` volume [3]. As I've been playing, not entirely successfully yet, with deploys to AWS ECS, I realised that this directive was not supported there and led to a silent failure. I distilled the base docker-compose.yml down to the bare minimum; additional configuration (such as volumes) can alternatively be passed via a separate compose file,

    docker compose -f docker-compose.yml -f docker-compose.local.yml up
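The override file would only carry the host-specific bits. A sketch, assuming the same hypothetical service name and volume path as above (not the actual file from the branch):

```yaml
# docker-compose.local.yml -- hypothetical override adding the host volume
services:
  mediagoblin:
    volumes:
      - ./data:/srv:z
```

Compose merges the two files, so the base file stays ECS-safe while local deployments add their volumes.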

It also relies on an env file that lists some of the variables that can be overridden in the entrypoint. This includes gnarly and ugly things such as module configuration, but hey, that works (for now) [4]. Most of those are only used to create a new `mediagoblin.ini`. Existing config will not be clobbered unless `SKIP_RECONFIG` is set to anything other than `true`.
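For illustration, such an env file might look like this. Only `SKIP_RECONFIG` appears in this mail; the commented-out line is a purely hypothetical placeholder for the module-configuration variables mentioned above:

```ini
# .env -- sketch; variable names other than SKIP_RECONFIG are assumptions
SKIP_RECONFIG=true    # keep an existing mediagoblin.ini untouched
#MEDIAGOBLIN_...=...  # hypothetical: entrypoint-driven module configuration
```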

# For developers

On the build side, I tried to move as much of the logic as I could out of the Makefile and straight into the Python setup, including deps that were historically provided by the system, such as lxml. This makes MG a lot more like a normal Python app (which is part of the ultimate goal, maybe even pushing it to PyPI).

As I've been iterating through fixing the tests, I have been able to refine the constraints in setup.cfg, including pinning around errors caused by more recent versions [e.g., 5]. We should work on upgrading to the latest versions as much as possible, and remove the upper-bound version constraints.
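The specific pins aren't listed in this mail, but the kind of constraint meant here would look something like this in setup.cfg (package names and versions below are hypothetical examples, not the branch's actual pins):

```ini
# setup.cfg -- hypothetical illustration of upper-bound constraints
[options]
install_requires =
    lxml
    celery >= 5.0
    sqlalchemy < 2.0  # upper bound around a known breakage; to be lifted later
```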

Unfortunately, due to GStreamer and GObject, I don't think it is entirely feasible to build and run in a stock Python container, so there still is some (in-container) Debian-magic to install those dependencies from the distro.

Confirmed. While I managed to build and run on a python:3.9 container as a base, I reverted to a Debian image due to the dependency on python-gst, which I didn't want to build manually. The Python Docker image ships its own Python (in /usr/local) separately from the base Debian package (in /usr). As a result, it cannot use the native libs from the python-gst package (in /usr), ultimately leading to the infamous `TypeError: Fraction() takes no parameters` [6]: the PyGObject bindings are built and locally available, but the native Gst library from python-gst is not in the expected path.
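In Dockerfile terms, the workaround is to start from the distro image so that the distro Python and the distro GStreamer bindings share the same prefix. A sketch, assuming Debian's usual package names (not the branch's actual Dockerfile):

```dockerfile
# Hypothetical sketch: use the distro Python so the GStreamer bindings'
# native libs (in /usr) are on the interpreter's search path; a stock
# python:3.9 image (Python in /usr/local) would not see them.
FROM debian:bullseye-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip python3-gi python3-gst-1.0 \
    && rm -rf /var/lib/apt/lists/*
```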

The idea is also to make the MG image as small as possible, so not shipping build dependencies (particularly JS ones) seems like a good idea. For those build dependencies (mainly Node.js, Bower and node_modules), I have a separate build stage which processes the assets and makes them available for Python to package afterwards (this bit works, but it is not thoroughly tested yet).
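The separate build stage would follow the standard multi-stage pattern: build the JS assets in a throwaway stage and copy only the results into the final image. A sketch with hypothetical paths (not the branch's actual Dockerfile):

```dockerfile
# Hypothetical multi-stage sketch: node_modules and bower never reach
# the final image, only the processed assets do.
FROM node:16 AS assets
WORKDIR /build
COPY . .
RUN npm install -g bower && bower install --allow-root

FROM debian:bullseye-slim
COPY --from=assets /build/mediagoblin/static /srv/mediagoblin/static
```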

I resorted to building from debian:bullseye-slim, which ships Python 3.9. Due to its use of the deprecated `collections` ABC aliases, MediaGoblin is currently not compatible with Python 3.10. Another fix that we'll have to work on soonish.
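The mail doesn't show the offending code, but the usual shape of this breakage (and its fix) is the following sketch. Python 3.10 removed the long-deprecated aliases that let the ABCs be imported straight from `collections`; the `flatten` helper below is a hypothetical example, not MediaGoblin code:

```python
# Up to Python 3.9 this worked (with a DeprecationWarning):
#     from collections import Iterable   # breaks on 3.10+
# The fix is to import from collections.abc instead:
from collections.abc import Iterable

def flatten(items):
    """Flatten nested iterables (strings are treated as atoms)."""
    for item in items:
        if isinstance(item, Iterable) and not isinstance(item, str):
            yield from flatten(item)
        else:
            yield item

print(list(flatten([1, [2, [3, "ab"]]])))  # [1, 2, 3, 'ab']
```

The same one-line import change works on every Python version MediaGoblin targets, since `collections.abc` has existed since Python 3.3.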

This leaves us with a ~1 GiB image, which is not ideal, but hard to beat as GStreamer's dependencies run deep, all the way down to systemd, which is quite useless in a container but gets installed through transitive package requirements nonetheless.

The build can also produce a Python sdist and wheel, which may or may not be good enough to push to PyPI (I haven't tried yet).

# Not addressed

* The configure script, still...

* the Git submodules: we should probably have a better way to bring...

* the plugins: at the moment, I package Python dependencies for all of...

Next step is me trying to migrate my instance (with data) to ECS, then look at the configure script.

In the meantime, please have a go at running from my Docker image, and report any success/failure. For the brave, you can also try building from my branch, with a simple

    docker build .

There are a couple of `--build-arg` options (at the beginning of the Dockerfile [7]) that you can use to influence what gets built. Note, however, that for the moment, anything that gets built is just baked into the final container image.
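The actual ARG names aren't listed in this mail, but the mechanism is the standard Dockerfile one, sketched here with a purely hypothetical name:

```dockerfile
# Hypothetical: an ARG near the top of the Dockerfile gating a build step
ARG BUILD_ASSETS=true  # name is an assumption, not the branch's actual ARG
```

which would then be toggled at build time with something like `docker build --build-arg BUILD_ASSETS=false .`.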

Olivier Mehani <>
PGP fingerprint: 4435 CF6A 7C8D DD9B E2DE  F5F9 F012 A6E2 98C6 6655
Confidentiality cannot be guaranteed on emails sent or received unencrypted.
