#12 Add a comment to new Lua %post about rpm-ostree
Opened 2 months ago by walters. Modified 2 months ago
rpms/ walters/glibc comment-lua-rpm-ostree  into  master

file modified
+7 -1

@@ -1745,7 +1745,13 @@
 
 %post -p <lua>
 -- We use lua's posix.exec because there may be no shell that we can
--- run during glibc upgrade.
+-- run during glibc upgrade.  This works around deficiencies in
+-- traditional librpm based in-place updates.
+-- Note this code will be ignored by rpm-ostree; see
+-- https://github.com/projectatomic/rpm-ostree/pull/1869
+-- If you change this to do something other than work around librpm
+-- in-place updates, consider writing it in shell script as
+-- a separate %post or so.
 function post_exec (program, ...)
   local pid = posix.fork ()
   if pid == 0 then
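The Lua scriptlet above forks and then execs the program directly because /bin/sh may be unusable in the middle of a glibc upgrade. A rough sketch of the same fork/exec pattern in Python (names and behavior here are illustrative, not the actual scriptlet):

```python
import os

def post_exec(program, *args):
    """Fork and exec a program directly, without going through a shell.
    Illustrative sketch of the pattern in the Lua %post; not the real
    scriptlet."""
    pid = os.fork()
    if pid == 0:
        # Child: replace this process image with the target program.
        try:
            os.execv(program, [program, *args])
        except OSError:
            os._exit(127)  # exec failed; bail out of the child
    # Parent: wait for the child and return its exit code.
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)

if __name__ == "__main__":
    print(post_exec("/bin/true"))
```

The key point is the same as in the Lua version: no intermediate shell process is ever spawned, so the scriptlet keeps working even while the shell's own dependencies are being replaced on disk.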


Sorry, the comment isn't correct. The iconvconfig invocation at the end of the script is not related to RPM limitations.

We could perhaps run iconvconfig during the build and install the result because users are unlikely to update the modules locally. If we can assume that gconv modules are properly packaged, we could use file triggers to rebuild the cache.

If we can assume that gconv modules are properly packaged, we could use file triggers to rebuild the cache.

This seems like a good use of file triggers indeed. As for "properly packaged": that's orthogonal to this, right? If a user drops custom modules there, even via RPM, the burden is on them to run iconvconfig regardless today, because it's very likely glibc's %post runs before their module is on disk.

(rpm-ostree entirely avoids that problem too by laying out all files, then running all scripts)

I was concerned about the unpackaged case, where the DSO is simply dropped into the relevant directory. This will stop working if we start pre-building the cache file. Worse, the cache will be replaced by each glibc update.


Right, we do need to consider the unpackaged case, just like we do for locales.

  • User installs gconv module, runs iconvconfig once, and everything should keep working into the future.

  • User installs pre-built locale, runs localedef --add-to-archive, and everything should keep working into the future (though with recent changes they may need to regenerate the cache with their own triggers to get an optimal archive setup, but that's just an optimization).

In both cases users drop files into the expected locations and it works without fail.
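Every iconv conversion is backed by these gconv modules, and a stale gconv-modules.cache would make a freshly dropped module invisible until iconvconfig runs again. A quick round-trip sanity check that the installed modules resolve (assumes a glibc system with the standard ISO-8859-1 module installed):

```shell
# Both conversion directions require the gconv module to be resolvable
# via gconv-modules / gconv-modules.cache.
printf 'héllo' | iconv -f UTF-8 -t ISO-8859-1 | iconv -f ISO-8859-1 -t UTF-8
```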

I was concerned about the unpackaged case, where the DSO is simply dropped into the relevant directory. This will stop working if we start pre-building the cache file.

Remember that for image-based systems (rpm-ostree and the container base image), this is already the case.

$ podman run --rm -ti registry.fedoraproject.org/fedora:30 ls -al /usr/lib64/gconv/gconv-modules.cache
-rw-r--r--. 1 root root 26398 Mar 24 12:23 /usr/lib64/gconv/gconv-modules.cache
$

In the container base image case, the cache will be regenerated only when glibc is upgraded. For rpm-ostree, /usr is read-only by default; any extensions currently need to be done via layered RPMs, and doing so would have "cache only updated when glibc is" semantics, same as the container base.

The solution here is file triggers, I think, not including the cache in the RPM itself.
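A file-trigger approach could look roughly like the following spec fragment (hypothetical sketch, not a proposed patch; the trigger path and the `|| :` error handling are assumptions). Any transaction that installs files under the gconv directory would rebuild the cache once, after all files are laid out:

```spec
# Rebuild the gconv module cache whenever any package in the
# transaction drops files into the gconv directory.
%transfiletriggerin -- %{_libdir}/gconv
%{_sbindir}/iconvconfig || :
```

Because %transfiletriggerin fires after the whole transaction's files are on disk, this also fixes the ordering problem mentioned above where glibc's %post runs before another package's module is installed.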