November 12, 2024

Paul Tagliamonte

Complex for Whom?

In basically every engineering organization I’ve ever regarded as particularly high functioning, I’ve sat through one specific recurring conversation – a conversation about “complexity”. Things are good or bad because they are or aren’t complex, architectures need to be redone because they’re too complex – some refactor of whatever it is won’t work because it’s too complex. You may have even been a part of some of these conversations – or even been the one advocating for simple light-weight solutions. I’ve done it. Many times.

Rarely, if ever, do we talk about complexity within its rightful context – complexity for whom. Is a solution complex because it’s complex for the end user? Is it complex if it’s complex for an API consumer? Is it complex if it’s complex for the person maintaining the API service? Is it complex if it’s complex for someone outside the team maintaining it to understand? Complexity within a problem domain, I’ve come to believe, is fairly zero-sum – there’s a fixed amount of complexity in the problem to be solved, and you can choose to either solve it or leave it for those downstream of you to solve on their own.

That being said, while I believe there is a lower bound of complexity to contend with for a problem, I do not believe there is an upper bound to the complexity of the solutions possible. It is always possible, and in fact very likely, that teams create problems for themselves while trying to solve a problem. The rest of this post talks to that lower bound. When getting feedback on an early draft of this blog post, I was informed that Fred Brooks coined a term for what I call “lower bound complexity” – “essential complexity”, in the paper “No Silver Bullet—Essence and Accidents of Software Engineering” – which is a better term and can be used interchangeably.

Complexity Culture

In a large enough organization, where the team is high functioning enough to have and maintain trust amongst peers, members of the team will specialize. People will begin to engage with subsets of the work to be done, and begin to have their efficacy measured against that part of the organization’s problems. Incentives shift, and over time it becomes increasingly likely that two engineers may have two very different priorities when working on the same system together. Someone accountable for uptime and tasked with responding to outages will begin to resist changes. Someone accountable for rapidly delivering features will resist gates between them and their users. Companies (either wittingly or unwittingly) will deal with this by tasking engineers with both production (feature development) and operational tasks (maintenance), so the difference in incentives isn’t usually as bad as it could be.

When we get a bunch of folks from far-flung corners of an organization in a room, fire up a slide deck and throw up some aspirational to-be architecture diagram in order to get a sign-off to solve some problem (be it that someone needs a credible promotion packet, a new feature needs to get delivered, or the system has begun to fail and needs fixing), the initial reaction will, more often than I’d like, start to devolve into a discussion of how this is going to introduce a bunch of complexity, how it’s going to be hard to maintain, and why can’t you make it less complex?

Right around here is when I start to try and contextualize the conversation happening around me – understand what complexity it is that’s being discussed, and who is taking on that burden. Think about who should be owning that problem, and work through the tradeoffs involved. Is it best solved here, or left to consumers (be they other systems, developers, or users)? Should something become an API call’s optional param, taking on all the edge-cases and so on, or should users have to implement the logic using the data you return (leaving everyone else to take on all the edge-cases and maintenance)? Should you process the data, or require the user to preprocess it for you?

Frequently it’s right to make an active and explicit decision to simplify and leave problems to be solved downstream, since they may not actually need to be solved – or perhaps you expect consumers will want to own the specifics of how the problem is solved, in which case you leave lots of documentation and examples. Many other times, especially when it’s something downstream consumers are likely to hit, it’s best solved internal to the system, since the only things that can come of leaving it unsolved are bugs, frustration and half-correct solutions. This is a grey-space of tradeoffs, not a clear decision tree. No one wants the software manifestation of a katamari ball or a junk drawer, nor does anyone want a half-baked service unable to handle the simplest use-case.

Head-in-sand as a Service

Popoffs about how complex something is, are, to a first approximation, best understood as meaning “complicated for the person making comments”. A lot of the #thoughtleadership believe that an AWS hosted EKS k8s cluster running images built by CI talking to an AWS hosted PostgreSQL RDS is not complex. They’re right. Mostly right. This is less complex – less complex for them. It’s not, however, without complexity and its own tradeoffs – it’s just complexity that they do not have to deal with. Now they don’t have to maintain machines that have pesky operating systems or hard drive failures. They don’t have to deal with updating the version of k8s, nor ensuring the backups work. No one has to push some artifact to prod manually. Deployments happen unattended. You click a button and get a cluster.

On the other hand, developers outside the ops function need to deal with troubleshooting CI, debugging access control rules encoded in Turing-complete YAML, permissions issues inside the cluster due to whatever the ***** a service mesh is, everyone needs to learn how to use some k8s tools they only actually use during a bad day, likely while doing some X.509 troubleshooting to connect to the cluster (an internal only endpoint; just port forward it) – not to mention all sorts of rules to route packets to their project (a single repo’s binary being run in 3 containers on a single vm host).

Beyond that, there’s the invisible complexity – complexity on the interior of a service you depend on. I think about the dozens of teams maintaining the EKS service (which is either run on EC2 instances, or alternately, EC2 instances in a trench coat, moustache and even more shell scripts), the RDS service (also EC2 and shell scripts, but this time accounting for redundancy, backups, availability zones), scores of hypervisors pulled off the shelf (xen, kvm) smashed together with the ones built in-house (firecracker, nitro, etc) running on hardware that has to be refreshed and maintained continuously. Every request processed by network ACL rules, AWS IAM rules, security group rules, using IP space announced to the internet wired through IXPs directly into ISPs. I don’t even want to begin to think about the complexity inherent in how those switches are designed. Shitloads of complexity to solve problems you may or may not have, or even know you had.

What’s more complex? An app running in an in-house 4u server racked in the office’s telco closet in the back running off the office Verizon line, or an app running four hypervisors deep in an AWS datacenter? Which is more complex to you? What about to your organization? In total? Which is more prone to failure? Which is more secure? Is the complexity good or bad? What type of Complexity can you manage effectively? Which threaten the system? Which threaten your users?

COMPLEXIVIBES

This extends beyond Engineering. Decisions regarding “what tools are we able to use” – be they existing contracts with cloud providers, CIO-mandated SaaS products, or a list of the only permissible open source projects – will incur costs in terms of expressed “complexity”. Pinning open source projects to a fixed set makes SBOM production “less complex”. Using only one SaaS provider’s product suite (even if it’s terrible, because it has all the types of tools you need) makes accreditation “less complex”. If all you have is a contract with Pauly T’s lowest-price technically acceptable artisanal cloudary and haberdashery, the way you pay for your compute is “less complex” for the CIO shop, though you will find yourself building your own hosted database template, your own mechanism to spin up a k8s cluster, and all the operational and technical burden that comes with it. Or you won’t, and you’ll make it everyone else’s problem in the organization. Nothing you can do will solve for the fact that you must now deal with this problem somewhere, because it was less complicated for the business to put the workloads on the existing contract with a cut-rate vendor.

Suddenly, the decision to “reduce complexity” because of an existing contract vehicle has resulted in a huge amount of technical risk and maintenance burden being onboarded. Complexity you would otherwise externalize has now been taken on internally. In a large enough organization (specifically, in this case, I’m talking about you, bureaucracies), this is largely ignored or accepted as normal, since the personnel cost is understood to be free to everyone involved. Doing it this way is more expensive, more work, less reliable and less maintainable, and yet, somehow, is, in a lot of ways, “less complex” to the organization. It’s particularly bad with bureaucracies, since screwing up a contract will get you into much more trouble than delivering a broken product, leaving basically no reason for anyone to care to fix this.

I can’t shake the feeling that for every story of technical mandates gone awry, somewhere just out of sight there’s a decisionmaker optimizing for what they believe to be the least amount of complexity – least hassle, fewest unique cases, most consistency – that they can get. They freely offload complexity from their accreditation and risk acceptance functions through mandates. They will never have to deal with it. That does not change the fact that someone does.

TC;DR (TOO COMPLEX; DIDN’T REVIEW)

We wish to rid ourselves of systemic Complexity – after all, complexity is bad, simplicity is good. Removing upper-bound own-goal complexity (“accidental complexity” in Brooks’s terms) is important, but once you hit the lower bound, the tradeoffs become zero-sum. Removing complexity from one part of the system means that somewhere else – maybe outside your organization, or in a non-engineering function – it must grow back. Sometimes the opposite happens, as is the case when a previously manual business process is automated. Maybe that’s a good idea. Maybe it’s not. All I know is that what doesn’t help the situation is conflating complexity with everything we don’t like – legacy code, maintenance burden or toil, cost, delivery velocity.

  • Complexity is not the same as proclivity to failure. The most reliable systems I’ve interacted with are unimaginably complex, with layers of internal protection to prevent complete failure. This has its own set of costs which other people have written about extensively.
  • Complexity is not cost. Sometimes the cost of taking all the complexity in-house is less, for whatever value of cost you choose to use.
  • Complexity is not absolute. Something simple from one perspective may be wildly complex from another. The impulse to burn down complex sections of code is helpful to have generally, but sometimes things are complicated for a reason, even if that reason exists outside your codebase or organization.
  • Complexity is not something you can remove without introducing complexity elsewhere. Just as not making a decision is a decision itself, choosing to require someone else to deal with a problem rather than dealing with it internally is a choice that needs to be considered in its full context.

Next time you’re sitting through a discussion and someone starts to talk about all the complexity about to be introduced, I want to pop up in the back of your head, politely asking: what does complex mean in this context? Is it lower bound complexity? Is this complexity desirable? Does what they’re saying mean something along the lines of “I don’t understand the problems being solved”, or does it mean something along the lines of “this problem should be solved elsewhere”? Do they believe this will result in more work for them in a way that you don’t see? Should this not be solved at all, by changing the bounds of what we accept or redefining the understood limits of this system? Is the perceived complexity a result of a decision made elsewhere? Who’s taking this complexity on – or, more to the point, is failing to address complexity required by the problem leaving it to others? Does it impact others? How, specifically? What are you not seeing?

What can change?

What should change?

12 November, 2024 08:21PM

Sven Hoexter

fluxcd: Validate flux-system Root Kustomization

Not entirely sure how people use fluxcd, but I guess most people have something like a flux-system flux kustomization as the root to add more flux kustomizations to their kubernetes cluster. Here all of that is living in a monorepo, and as we're all human, people figure out different ways to break it, which brings the reconciliation of the flux controllers down. Thus we set out to do some pre-flight validations.

Note1: We do not use flux variable substitutions for those root kustomizations, so if you use those, you have to put additional work into the validation and pipe things through flux envsubst.
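
If you do use them, an additional check along the lines of the following could be added to the loops shown below. This is only a sketch: it assumes the referenced variables are exported in the environment beforehand, and that your flux CLI version provides the envsubst subcommand with a --strict flag that fails on unset variables.

    # validate that all flux variable substitutions can be resolved
    kustomize build . | flux envsubst --strict > /dev/null || \
        echo "Error substituting flux variables for cluster ${CLUSTER}"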

First Iteration: Just Run kustomize Like Flux Would Do It

With a folder structure where we have a clusters folder with subfolders per cluster, we just run a for loop over all of them:

for CLUSTER in ${CLUSTERS}; do
    pushd clusters/${CLUSTER}

    # validate if we can create and build a flux-system like kustomization file
    kustomize create --autodetect --recursive
    if ! kustomize build . -o /dev/null 2> error.log; then
        echo "Error building flux-system kustomization for cluster ${CLUSTER}"
        cat error.log
    fi

    popd
done

Second Iteration: Make Sure Our Workload Subfolders Have a kustomization.yaml

Next someone figured out that you can delete some yaml files from a workload subfolder, including the kustomization.yaml, but not all of them. That leaves behind a resource definition which lacks some other referenced objects, but is still happily included in the root kustomization by kustomize create and flux, which of course does not work.

Thus we started to catch that as well in our growing for loop:

for CLUSTER in ${CLUSTERS}; do
    pushd clusters/${CLUSTER}

    # validate if we can create and build a flux-system like kustomization file
    kustomize create --autodetect --recursive
    if ! kustomize build . -o /dev/null 2> error.log; then
        echo "Error building flux-system kustomization for cluster ${CLUSTER}"
        cat error.log
    fi

    # validate if we always have a kustomization file in folders with yaml files
    for CLFOLDER in $(find . -type d); do
        test -f ${CLFOLDER}/kustomization.yaml && continue
        test -f ${CLFOLDER}/kustomization.yml && continue
        if [[ $(find ${CLFOLDER} -maxdepth 1 \( -name '*.yaml' -o -name '*.yml' \) -type f|wc -l) != 0 ]]; then
            echo "Error Cluster ${CLUSTER} folder ${CLFOLDER} lacks a kustomization.yaml"
        fi
    done

    popd
done

Note2: I shortened those snippets to the core parts. In our case some things are a bit specific to how we implemented the execution of those checks in GitHub Actions workflows. Hope that's enough to convey the idea of what to check for.
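
One detail that matters when running such checks from CI: the snippets above only echo errors. To make a pipeline step actually fail, you need to track failures and exit non-zero. A minimal sketch, reusing the first check from above:

FAILED=0
for CLUSTER in ${CLUSTERS}; do
    pushd clusters/${CLUSTER} >/dev/null
    kustomize create --autodetect --recursive
    if ! kustomize build . -o /dev/null 2> error.log; then
        echo "Error building flux-system kustomization for cluster ${CLUSTER}"
        cat error.log
        FAILED=1
    fi
    popd >/dev/null
done
exit ${FAILED}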

12 November, 2024 02:19PM

James Bromberger

My own little server

In 2004, I was living in London, and decided it was time I had my own little virtual private server somewhere online. As a Debian developer since the start of 2000, it had to be Debian, and it still is… This was before “cloud” as we know it today. Virtual Private Servers (VPS) was a … Continue reading "My own little server"

12 November, 2024 12:34PM by james

November 11, 2024

Dirk Eddelbuettel

RcppSpdlog 0.0.19 on CRAN: New Upstream, New Features

Version 0.0.19 of RcppSpdlog arrived on CRAN early this morning and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want, written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

This release updates the code to version 1.15.0 of spdlog, which was released on Saturday, and contains fmt 11.0.2. It also contains a contributed PR which allows use of std::format under C++20, bypassing fmt (with some post-merge polish too), and another PR correcting a documentation double-entry.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.19 (2024-11-10)

  • Support use of std::format under C++20 via opt-in define instead of fmt (Xanthos Xanthopoulos in #19)

  • An erroneous duplicate log=level documentation level was removed (Contantinos Giachalis in #20)

  • Upgraded to upstream release spdlog 1.15.0 (Dirk in #21)

  • Partially revert / simplify src/formatter.cpp accommodating both #19 and previous state (Dirk in #21)

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page, or the package documentation site. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 November, 2024 05:47PM

Gunnar Wolf

Why academics under-share research data - A social relational theory

This post is a review for Computing Reviews of Why academics under-share research data - A social relational theory, an article published in the Journal of the Association for Information Science and Technology.

As an academic, I have cheered for and welcomed the open access (OA) mandates that, slowly but steadily, have been accepted in one way or another throughout academia. It is now often accepted that public funds mean public research. Many of our universities or funding bodies will demand that, with varying intensities – sometimes they demand research to be published in an OA venue, sometimes a mandate will only “prefer” it. Lately, some journals and funding bodies have expanded this mandate toward open science, requiring not only research outputs (that is, articles and books) to be published openly but for the data backing the results to be made public as well. As a person who has been involved with free software promotion since the mid 1990s, it was natural for me to join the OA movement and to celebrate when various universities adopt such mandates.

Now, what happens after a university or funder body adopts such a mandate? Many individual academics cheer, as it is the “right thing to do.” However, the authors observe that this is not really followed thoroughly by academics. What can be observed, rather, is the slow pace or “feet dragging” of academics when they are compelled to comply with OA mandates, or even an outright refusal to do so. If OA and open science are close to the ethos of academia, why aren’t more academics enthusiastically sharing the data used for their research? This paper finds a subversive practice embodied in the refusal to comply with such mandates, and explores an hypothesis based on Karl Marx’s productive worker theory and Pierre Bourdieu’s ideas of symbolic capital.

The paper explains that academics, as productive workers, become targets for exploitation: given that it’s not only the academics’ sharing ethos, but private industry’s push for data collection and industry-aligned research, they adapt to technological changes and jump through all kinds of hurdles to create more products, in a result that can be understood as a neoliberal productivity measurement strategy. Neoliberalism assumes that mechanisms that produce more profit for academic institutions will result in better research; it also leads to the disempowerment of academics as a class, although they are rewarded as individuals due to the specific value they produce.

The authors continue by explaining how open science mandates seem to ignore the historical ways of collaboration in different scientific fields, and exploring different angles of how and why data can be seen as “under-shared,” failing to comply with different aspects of said mandates. This paper, built on the social sciences tradition, is clearly a controversial work that can spark interesting discussions. While it does not specifically touch on computing, it is relevant to Computing Reviews readers due to the relatively high percentage of academics among us.

11 November, 2024 02:53PM

Vincent Bernat

Customize Caddy's plugins with Nix

Caddy is an open-source web server written in Go. It handles TLS certificates automatically and comes with a simple configuration syntax. Users can extend its functionality through plugins1 to add features like rate limiting, caching, and Docker integration.

While Caddy is available in Nixpkgs, adding extra plugins is not simple.2 The compilation process needs Internet access, which Nix denies during build to ensure reproducibility. When trying to build the following derivation using xcaddy, a tool for building Caddy with plugins, it fails with this error: dial tcp: lookup proxy.golang.org on [::1]:53: connection refused.

{ pkgs }:
pkgs.stdenv.mkDerivation {
  name = "caddy-with-xcaddy";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase =
    ''
      xcaddy build --with github.com/caddy-dns/[email protected]
    '';
  installPhase = ''
    mkdir -p $out/bin
    cp caddy $out/bin
  '';
}

Fixed-output derivations are an exception to this rule and get network access during build. They need to specify their output hash. For example, the fetchurl function produces a fixed-output derivation:

{ stdenv, fetchurl }:
stdenv.mkDerivation rec {
  pname = "hello";
  version = "2.12.1";
  src = fetchurl {
    url = "mirror://gnu/hello/hello-${version}.tar.gz";
    hash = "sha256-jZkUKv2SV28wsM18tCqNxoCZmLxdYH2Idh9RLibH2yA=";
  };
}

To create a fixed-output derivation, you need to set the outputHash attribute. The example below shows how to output Caddy’s source code, with some plugin enabled, as a fixed-output derivation using xcaddy and go mod vendor.

pkgs.stdenvNoCC.mkDerivation rec {
  pname = "caddy-src-with-xcaddy";
  version = "2.8.4";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase =
    ''
      export GOCACHE=$TMPDIR/go-cache
      export GOPATH="$TMPDIR/go"
      XCADDY_SKIP_BUILD=1 TMPDIR="$PWD" \
        xcaddy build v${version} --with github.com/caddy-dns/[email protected]
      (cd buildenv* && go mod vendor)
    '';
  installPhase = ''
    mv buildenv* $out
  '';

  outputHash = "sha256-F/jqR4iEsklJFycTjSaW8B/V3iTGqqGOzwYBUXxRKrc=";
  outputHashAlgo = "sha256";
  outputHashMode = "recursive";
}

With a fixed-output derivation, it is up to us to ensure the output is always the same:

  • we ask xcaddy to not compile the program and keep the source code,3
  • we pin the version of Caddy we want to build, and
  • we pin the version of each requested plugin.
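
A practical note on obtaining the outputHash in the first place: a common approach (not specific to this setup) is to start with a placeholder such as lib.fakeHash, run the build once, and copy the correct value from the hash-mismatch error that Nix prints. Assuming the derivation above is exposed as a flake attribute named caddy-src (a hypothetical name), that looks roughly like:

# build once with a dummy outputHash and read the real hash from the error message
nix build .#caddy-src 2>&1 | grep -E 'specified:|got:'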

You can use this derivation to override the src attribute in pkgs.caddy:

pkgs.caddy.overrideAttrs (prev: {
  src = pkgs.stdenvNoCC.mkDerivation { /* ... */ };
  vendorHash = null;
  subPackages = [ "." ];
});

Check out the complete example in the GitHub repository. To integrate into a Flake, add github:vincentbernat/caddy-nix as an overlay:

{
  inputs = {
    nixpkgs.url = "nixpkgs";
    flake-utils.url = "github:numtide/flake-utils";
    caddy.url = "github:vincentbernat/caddy-nix";
  };
  outputs = { self, nixpkgs, flake-utils, caddy }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = import nixpkgs {
          inherit system;
          overlays = [ caddy.overlays.default ];
        };
      in
      {
        packages = {
          default = pkgs.caddy.withPlugins {
            plugins = [ "github.com/caddy-dns/[email protected]" ];
            hash = "sha256-F/jqR4iEsklJFycTjSaW8B/V3iTGqqGOzwYBUXxRKrc=";
          };
        };
      });
}
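
To quickly check that the resulting binary really contains the requested plugin, a small smoke test like the following should work from the flake directory (caddy list-modules prints the modules compiled into the binary; adjust the grep pattern to the plugin you chose):

nix build
./result/bin/caddy list-modules | grep dns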

  1. This article uses the term “plugins,” though Caddy documentation also refers to them as “modules” since they are implemented as Go modules. ↩︎

  2. This has been a feature request for quite some time. A proposed solution has been rejected. The one described in this article is a bit different. ↩︎

  3. This is not perfect: if the source code produced by xcaddy changes, the hash would change and the build would fail. ↩︎

11 November, 2024 07:35AM by Vincent Bernat

November 10, 2024

Dirk Eddelbuettel

inline 0.3.20: Mostly Maintenance

A new release of the inline package got to CRAN today, marking the first release in three and a half years. inline facilitates writing code in-line in simple string expressions or short files. The package was used quite extensively by Rcpp in the very early days before Rcpp Attributes arrived on the scene providing an even better alternative for its use cases. inline is still used by rstan and a number of other packages.

This release was tickled by a change in r-devel just this week, and the corresponding ‘please fix or else’ email I received this morning. R_NO_REMAP is now the default in r-devel, and while we had already converted most (old-style) calls into the API to use the now mandatory Rf_ prefix, the package contained a few remaining cases in examples as well as one in code generation. The release also contains a helpful contributed PR making an error message a little clearer, plus several small and common maintenance changes around continuous integration, package layout and the repository.

The NEWS extract follows and details the changes some more.

Changes in inline version 0.3.20 (2024-11-10)

  • Error message formatting is improved for compileCode (Alexis Derumigny in #25)

  • Switch to using Authors@R, other general packaging maintenance for continuous integration and repository

  • Use Rf_ in a handful of cases as R-devel now mandates it

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

10 November, 2024 07:29PM

Reproducible Builds

Reproducible Builds in October 2024

Welcome to the October 2024 report from the Reproducible Builds project.

Our reports attempt to outline what we’ve been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

Table of contents:

  1. Beyond bitwise equality for Reproducible Builds?
  2. ‘Two Ways to Trustworthy’ at SeaGL 2024
  3. Number of cores affected Android compiler output
  4. On our mailing list…
  5. diffoscope
  6. IzzyOnDroid passed 25% reproducible apps
  7. Distribution work
  8. Website updates
  9. Reproducibility testing framework
  10. Supply-chain security at Open Source Summit EU
  11. Upstream patches

Beyond bitwise equality for Reproducible Builds?

Jens Dietrich and Tim White of Victoria University of Wellington, New Zealand, along with Behnaz Hassanshahi and Paddy Krishnan of Oracle Labs Australia, published a paper entitled “Levels of Binary Equivalence for the Comparison of Binaries from Alternative Builds”:

The availability of multiple binaries built from the same sources creates new challenges and opportunities, and raises questions such as: “Does build A confirm the integrity of build B?” or “Can build A reveal a compromised build B?”. To answer such questions requires a notion of equivalence between binaries. We demonstrate that the obvious approach based on bitwise equality has significant shortcomings in practice, and that there is value in opting for alternative notions. We conceptualise this by introducing levels of equivalence, inspired by clone detection types.

A PDF of the paper is freely available.


‘Two Ways to Trustworthy’ at SeaGL 2024

On Friday 8th November, Vagrant Cascadian will present a talk entitled Two Ways to Trustworthy at SeaGL in Seattle, WA.

Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free source software, hardware and culture. Vagrant’s talk:

[…] delves into how two project[s] approaches fundamental security features through Reproducible Builds, Bootstrappable Builds, code auditability, etc. to improve trustworthiness, allowing independent verification; trustworthy projects require little to no trust.

Exploring the challenges that each project faces due to very different technical architectures, but also contextually relevant social structure, adoption patterns, and organizational history should provide a good backdrop to understand how different approaches to security might evolve, with real-world merits and downsides.


Number of cores affected Android compiler output

Fay Stegerman wrote that the cause of the Android toolchain bug from September’s report (which she had reported to the Android issue tracker) has been found and the bug has been fixed.

the D8 Java to DEX compiler (part of the Android toolchain) eliminated a redundant field load if running the class’s static initialiser was known to be free of side effects, which ended up accidentally depending on the sharding of the input, which is dependent on the number of CPU cores used during the build.

To make it easier to understand the bug and the patch, Fay also made a small example to illustrate when and why the optimisation involved is valid.


On our mailing list…

On our mailing list this month:


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 279, 280, 281 and 282 to Debian:

  • Ignore errors when listing .ar archives (#1085257). []
  • Don’t try and test with systemd-ukify in the Debian stable distribution. []
  • Drop Depends on the deprecated python3-pkg-resources (#1083362). []

In addition, Jelle van der Waa added support for Unified Kernel Image (UKI) files. [][][] Furthermore, Vagrant Cascadian updated diffoscope in GNU Guix to version 282. [][]


IzzyOnDroid passed 25% reproducible apps

The IzzyOnDroid project has reached a good milestone: over 25% of the ~1,200 Android apps provided by their repository (of official APKs built by the original application developers) have now been confirmed to be reproducible by a rebuilder.


Distribution work

In Debian this month:

  • Holger Levsen uploaded devscripts version 2.24.2, including many changes to the debootsnap, debrebuild and reproducible-check scripts. This is the first time that debrebuild actually works (using sbuild’s unshare backend). As part of this, Holger also fixed an issue in the reproducible-check script where a typo in the code led to incorrect results []

  • Recently, a news entry was added to snapshot.debian.org’s homepage, describing the recent changes that made the system stable again:

    The new server has no problems keeping up with importing the full archives on every update, as each run finishes comfortably in time before it’s time to run again. [While] the new server is the one doing all the importing of updated archives, the HTTP interface is being served by both the new server and one of the VM’s at LeaseWeb.

    The entry lists a number of specific updates surrounding the API endpoints and rate limiting.

  • Lastly, 12 reviews of Debian packages were added, 3 were updated and 18 were removed this month adding to our knowledge about identified issues.

Elsewhere in distribution news, Zbigniew Jędrzejewski-Szmek performed another rebuild of Fedora 42 packages, with the headline result being that 91% of the packages are reproducible. Zbigniew also reported a reproducibility problem with QImage.

Finally, in openSUSE, Bernhard M. Wiedemann published another report for that distribution.


Website updates

There were an enormous number of improvements made to our website this month, including:

  • Alba Herrerias:

    • Improve consistency across distribution-specific guides. []
    • Fix a number of links on the Contribute page. []
  • Chris Lamb:

  • hulkoba

  • James Addison:

    • Huge and significant work on an (as-yet-unmerged) quickstart guide to be linked from the homepage [][][][][]
    • On the homepage, link directly to the Projects subpage. []
    • Relocate “dependency-drift” notes to the Volatile inputs page. []
  • Ninette Adhikari:

    • Add a brand new ‘Success stories’ page that “highlights the success stories of Reproducible Builds, showcasing real-world examples of projects shipping with verifiable, reproducible builds”. [][][][][][]
  • Pol Dellaiera:

    • Update the website’s README page for building the website under NixOS. [][][][][]
    • Add a new academic paper citation. []

Lastly, Holger Levsen filed an extensive issue detailing a request to create an overview of recommendations and standards in relation to reproducible builds.


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In October, a number of changes were made by Holger Levsen, including:

  • Add a basic index.html for rebuilderd. []
  • Update the nginx.conf configuration file for rebuilderd. []
  • Document how to use a rescue system for Infomaniak’s OpenStack cloud. []
  • Update usage info for two particular nodes. []
  • Fix up a version skew check to fix the name of the riscv64 architecture. []
  • Update the rebuilderd-related TODO. []

In addition, Mattia Rizzolo added a new IP address for the inos5 node [] and Vagrant Cascadian brought 4 virt nodes back online [].


Supply-chain security at Open Source Summit EU

The Open Source Summit EU took place recently, and covered plenty of topics related to supply-chain security, including:


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

10 November, 2024 06:26PM

Thorsten Alteholz

My Debian Activities in October 2024

FTP master

This month I accepted 398 and rejected 22 packages. The overall number of packages that got accepted was 441.

In case your RM bug is not closed within a month, you can assume that either the conversion of the subject of the bug email to the corresponding dak command did not work or you still need to take care of reverse dependencies. The dak command related to your removal bug can be found here.

Unfortunately the behavior of some project members caused a decline in motivation among team members to work on these bugs. When I look at these bugs, I just copy and paste the above-mentioned dak commands. If they don’t work, I don’t have the time to debug what is going wrong. So please read the docs and take care of it yourself. Please also keep in mind that you need to close the bug or set a moreinfo tag if you don’t want anybody to act on your removal bug.

Debian LTS

This was my hundred-twenty-fourth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 3925-1] asterisk security update to fix two CVEs related to privilege escalation and DoS
  • [DLA 3940-1] xorg-server update to fix one CVE related to privilege escalation

Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the seventy-fifth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1198-1] cups security update for one CVE in Buster to fix the IPP attribute related CVEs.
  • [ELA-1199-1] cups security update for two CVEs in Stretch to fix the IPP attribute related CVEs
  • [ELA-1216-1] graphicsmagick security update for one CVE in Jessie
  • [ELA-1217-1] asterisk security update for two CVEs in Buster related to privilege escalation
  • [ELA-1218-1] asterisk security update for two CVEs in Stretch related to privilege escalation and DoS
  • [ELA-1223-1] xorg-server security update for one CVE in Jessie, Stretch and Buster related to privilege escalation

I also did a week of FD and attended the monthly LTS/ELTS meeting.

Debian Printing

Unfortunately I didn’t find any time to work on this topic.

Debian Matomo

Unfortunately I didn’t find any time to work on this topic.

Debian Astro

Unfortunately I didn’t find any time to work on this topic.

Debian IoT

This month I uploaded new upstream or bugfix versions of:

  • pywws (yes, again this month)

Debian Mobcom

This month I uploaded new packages or new upstream or bugfix versions of:

misc

This month I uploaded new upstream or bugfix versions of:

10 November, 2024 12:26AM by alteholz

November 09, 2024

Jonathan Dowland

Progressively enhancing CGI apps with htmx

I was interested in learning about htmx, so I used it to improve the experience of posting comments on my blog.

It seems much of modern web development is structured around having a JavaScript program on the front-end (browser) which exchanges data encoded in JSON asynchronously with the back-end servers. htmx uses a novel (or throwback) approach: it asynchronously fetches snippets of HTML from the back-end, and splices the results into the live page. For example, a htmx-powered button may request a URI on the server, receive HTML in response, and then the button itself would be replaced by the resulting HTML, within the page.

I experimented with incorporating it into an existing, old-school CGI web app: IkiWiki, which I became a co-maintainer of this year, and powers my blog. Throughout this project I referred to the excellent book Server-Driven Web Apps with htmx.

Comment posting workflow

I really value blog comments, but the UX for posting them on my blog was a bit clunky. It went like this:

  1. you load a given page (such as this blog post), which is a static HTML document. There's a link to add a comment to the page.

  2. The link loads a new page which is generated dynamically and served back to you via CGI. This contains a HTML form for you to write your comment.

  3. The form submits to the server via HTTP POST. IkiWiki validates the form content. Various static pages (in particular the one you started on, in Step 1) are regenerated.

  4. the server's response to the request in (3) is an HTTP 302 redirect, instructing the browser to go back to the page in Step 1.

First step: fetching a comment form

First, I wanted the "add a comment" link to present the edit box in the current page. This step was easiest: add four attributes to the "comment on this page" anchor tag:

hx-get="<CGI ENDPOINT GOES HERE>"
    suppresses the normal behaviour of the tag, so clicking on it doesn't load a new page, and instead issues an asynchronous HTTP GET to the CGI end-point, which returns the full HTML document for the comment edit form

hx-select=".editcomment form"
    extracts the edit-comment form from within that document

hx-swap=beforeend and hx-target=".addcomment"
    append (courtesy of beforeend) the form into the source page after the "add comment" anchor tag (.addcomment)

Now, clicking "comment on this page" loads in the edit-comment box below it without moving you away from the source page. All that without writing any new code!

Second step: handling previews

The old Preview Comment page

In the traditional workflow, clicking on "Preview" loaded a new page containing the edit form (but not the original page or any existing comments) with a rendering of the comment-in-progress below it. I wasn't originally interested in supporting the "Preview" feature, but I needed to for reasons I'll explain later.

Rather than load new pages, I wanted "Preview" to insert a rendering of the comment-in-progress being inserted into the current page's list of comments, marked up to indicate that it's a preview.

IkiWiki provides some templates which you can override to customise your site. I've long overridden page.tmpl, the template used for all pages. I needed to add a new empty div tag in order to have a "hook" to target with the previewed comment.

The rest of this was achieved with htmx attributes on the "Preview" button, similar to in the last step: hx-post to define a target URI when you click the button (and specify HTTP POST); hx-select to filter the resulting HTML and extract the comment; hx-target to specify where to insert it.

Now, clicking "Preview" does not leave the current page, but fetches a rendering of your comment-in-progress, and splices it into the comment list, appropriately marked up to be clear it's a preview.

Third step: handling submitted comments

IkiWiki is highly configurable, and many different things could happen once you post a comment.

On my personal blog, all comments are held for moderation before they are published. The page you were served after submitting a comment was rather bare-bones, a status message "Your comment will be posted after moderator review", without the original page content or comments.

I wanted your comment to appear in the page immediately, albeit marked up to indicate it was awaiting review. Since the traditional workflow didn't render or present your comment to you, I had to cheat.

handling moderated comments

Moderation message upon submitting a comment

One of my goals with this project was not to modify IkiWiki itself. I had to break this rule for moderated comments. When returning the "comment is moderated" page, IkiWiki uses HTTP status code 200, the same as for other scenarios. I wrote a tiny patch to return HTTP 202 (Accepted, but not processed) instead.

I now have to write some actual JavaScript. htmx emits the htmx:beforeSwap event after an AJAX call returns, but before the corresponding swap is performed. I wrote a function that is triggered on this event, filters for HTTP 202 responses, triggers the "Preview" button, and then alters the result to indicate a moderated, rather than previewed, comment. (That's why I bothered to implement previews). You can read the full function here: jon.js.

Summary

I've done barely any front-end web development for years and I found working with htmx to be an enjoyable experience.

You can leave a comment on this very blog post if you want to see it in action. I couldn't resist adding an easter egg: Brownie points if you can figure out what it is.

Adding htmx to an existing CGI-based website let me improve one of the workflows in a gracefully-degrading way (without JavaScript, the old method will continue to work fine) without modifying the existing application itself (well, almost) and without having to write very much code of my own at all: nearly all of the configuration was declarative.

09 November, 2024 09:16PM

November 08, 2024

Thomas Lange

Using NIS (Network Information Service) in 2024

The topic of this posting already tells you that an old Unix guy tells stories about old techniques.

I've been a happy NIS (formerly YP) user for 30+ years. I started using it with SunOS 4.0, later with Solaris, and with Linux since 1999.

In the past, a colleague wasn't happy using NIS+ when he couldn't log in as root after a short time because of some well known bugs and wrong configs. NIS+ was also much slower than my NIS setup. I know organisations using NIS for more than 80,000 user accounts in 2024.

I know the security implications of NIS but I can live with them, because I manage all computers in the network that have access to the NIS maps. And NIS on Linux offers to use shadow maps, which are only accessible to the root account. My users are forced to use very long passwords.

Unfortunately NIS support for the PAM modules was removed in Debian in pam 1.4.0-13, which means Debian 12 (bookworm) is lacking NIS support in PAM, but otherwise it is still supported. This only affects changing the NIS password via passwd. You can still authenticate users and use other NIS maps.

But yppasswd is deprecated and you should not use it! If you use yppasswd it may generate a new password hash by using the old DES crypt algorithm, which is very weak and only uses the first 8 chars of your password. Do not use yppasswd any more! yppasswd only detects DES, MD5, SHA256 and SHA512 hashes, but for me and some colleagues it only creates weak DES hashes after a password change. yescrypt hashes, which are the default in Debian 12, are not supported at all. The solution is to use the plain passwd program.

On the NIS master, you should setup your NIS configuration to use /etc/shadow and /etc/passwd even if your other NIS maps are in /var/yp/src or similar. Make sure to have these lines in your /var/yp/Makefile:

PASSWD      = /etc/passwd
SHADOW      = /etc/shadow

Call make once, and it will generate the shadow and passwd maps. You may want to set the variable MINUID, which defines which entries are not put into the NIS maps.
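
For example, to keep system accounts out of the generated maps, you could set it to the first regular UID (1000 on current Debian systems); MINGID is the analogous variable for groups:

# exclude system accounts and groups from the generated NIS maps
MINUID=1000
MINGID=1000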

On all NIS clients you still need the entries (for passwd, shadow, group,...) that point to the nis service. E.g.:

passwd:         files nis systemd
group:          files nis systemd
shadow:         files nis

You can remove all occurrences of "nis" in your /etc/pam.d/common-password file.

Then you can use the plain passwd program to change your password on the NIS master. But this does not call make in /var/yp for updating the NIS shadow map.

Let's use inotify(7) for that. First, create a small shell script /usr/local/sbin/shadow-change:

#! /bin/sh

PATH=/usr/sbin:/usr/bin

# only watch the /etc/shadow file
if [ "$2" != "shadow" ]; then
  exit 0
fi

cd /var/yp || exit 3
sleep 2
make

Then install the package incron.

# apt install incron
# echo root >> /etc/incron.allow
# incrontab -e

Add this line:

/etc    IN_MOVED_TO     /usr/local/sbin/shadow-change $@ $# $%

It's not possible to use IN_MODIFY or watch other events on /etc/shadow directly, because the passwd command creates a /etc/nshadow file, deletes /etc/shadow and then moves nshadow to shadow. inotify on a file does not work after the file was removed.

You can see the logs from incrond by using:

# journalctl _COMM=incrond
For example:

Oct 01 12:21:56 kueppers incrond[6588]: starting service (version 0.5.12, built on Jan 27 2023 23:08:49)
Oct 01 13:43:55 kueppers incrond[6589]: table for user root created, loading
Oct 01 13:45:42 kueppers incrond[6589]: PATH (/etc) FILE (shadow) EVENT (IN_MOVED_TO)
Oct 01 13:45:42 kueppers incrond[6589]: (root) CMD ( /usr/local/sbin/shadow-change /etc shadow IN_MOVED_TO)

I've disabled the execution of yppasswd using dpkg-divert:

# dpkg-divert --local --rename --divert /usr/bin/yppasswd-disable /usr/bin/yppasswd
# chmod a-rwx /usr/bin/yppasswd-disable

Do not forget to limit the access to the shadow.byname map in ypserv.conf and general access to NIS in ypserv.securenets.
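
As an illustration only – the exact syntax is described in the ypserv manual pages, and the subnet below is just a placeholder – such a restriction could look roughly like this:

# /etc/ypserv.conf: answer shadow.byname requests only from privileged ports
*               : *       : shadow.byname  : port

# /etc/ypserv.securenets: only answer NIS clients from the local subnet
255.255.255.0   192.168.1.0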

I've also discovered the package pamtester, which is a nice tool for testing your PAM configs.
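
For example, to verify that authentication and password changes for a NIS user still go through your PAM stack (jdoe being a placeholder account):

# test PAM authentication and password change for a user
pamtester login jdoe authenticate
pamtester passwd jdoe chauthtok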

08 November, 2024 12:32PM

Freexian Collaborators

Debian Contributions: October’s report (by Anupa Ann Joseph)

Debian Contributions: 2024-10

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

rebootstrap, by Helmut Grohne

After significant changes earlier this year, the state of architecture cross bootstrap is normalizing again. More and more architectures manage to complete rebootstrap testing successfully again. Here are two examples of what kind of issues the bootstrap testing identifies.

At some point, libpng1.6 would fail to cross build on musl architectures (whereas it would succeed on other ones), failing to locate zlib. Adding --debug-find to the cmake invocation eventually revealed that it would fail to search in /usr/lib/<triplet>, which is the default library path. This turned out to be a bug in cmake assuming that all Linux systems use glibc. libpng1.6 also gained a baseline violation for powerpc and ppc64 by enabling the use of AltiVec there.

The newt package would fail to cross build for many 32-bit architectures whereas it would succeed for armel and armhf, due to -Wincompatible-pointer-types. It turns out that this warning had been turned into an error via -Werror and the package was merely compiling with a warning earlier. The actual problem is a difference in signedness between wchar_t and FriBidiChar (aka uint32_t) and actually affects native building on i386.

Miscellaneous contributions

  • Helmut sent 35 patches for cross build failures.
  • Stefano Rivera uploaded the Python 3.13.0 final release.
  • Stefano continued to rebuild Python packages with C extensions using Python 3.13, to catch compatibility issues before the 3.13-add transition starts.
  • Stefano uploaded new versions of a handful of Python packages, including: dh-python, objgraph, python-mitogen, python-truststore, and python-virtualenv.
  • Stefano packaged a new release of mkdocs-macros-plugin, which required packaging a new Python package for Debian, python-super-collections (now in NEW review).
  • Stefano helped the mini-DebConf Online Brazil get video infrastructure up and running for the event. Unfortunately, Debian’s online-DebConf setup has bitrotted over the last couple of years, and it eventually required new temporary Jitsi and Jibri instances.
  • Colin Watson fixed a number of autopkgtest failures to get ansible back into testing.
  • Colin fixed an ssh client failure in certain cases when using GSS-API key exchange, and added an integration test to ensure this doesn’t regress in future.
  • Colin worked on the Python 3.13 transition, fixing problems related to it in 15 packages. This included upstream work in a number of packages (postgresfixture, python-asyncssh, python-wadllib).
  • Colin upgraded 41 Python packages to new upstream versions.
  • Carles improved po-debconf-manager: now it can create merge requests to Salsa automatically (created 17, new batch coming this month), imported almost all the packages with debconf translation templates whose VCS is Salsa (currently 449 imported), added statistics per package and language, improved command line interface options. Performed user support fixing different issues. Also prepared an abstract for the talk at MiniDebConf Toulouse.
  • Santiago Ruano Rincón continued the organization work for the DebConf 25 conference, to be held in Brest, France. Part of the work relates to the initial edits of the sponsoring brochure. Thanks to Benjamin Somers who finalized the French and English versions.
  • Raphaël forwarded a couple of zim and hamster bugs to the upstream developers, and tried to diagnose a delayed startup of gdm on his laptop (cf #1085633).
  • On behalf of the Debian Publicity Team, Anupa interviewed 7 women from the Debian community, old and new contributors. The interview was published in Bits from Debian.

08 November, 2024 12:00AM by Anupa Ann Joseph

Reproducible Builds (diffoscope)

diffoscope 283 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 283. This version includes the following changes:

[ Martin Abente Lahaye ]
* Fix crash when objdump is missing when checking .EFI files.

You can find out more by visiting the project homepage.

08 November, 2024 12:00AM

November 07, 2024

Jonathan Dowland

John Carpenter's "The Fog"

'The Fog' 7 inch vinyl record

A gift from my brother. Coincidentally I’ve had John Carpenter’s “Halloween” echoing around my head for weeks: I’ve been deconstructing it and trying to learn to play it.

07 November, 2024 09:51AM

November 06, 2024

Bits from Debian

Bits from the DPL

Dear Debian community,

this is Bits from DPL for October. In addition to a summary of my recent activities, I aim to include newsworthy developments within Debian that might be of interest to the broader community. I believe this provides valuable insights and fosters a sense of connection across our diverse projects. Also, I welcome your feedback on the format and focus of these Bits, as community input helps shape their value.

Ada Lovelace Day 2024

As outlined in my platform, I'm committed to increasing the diversity of Debian developers. I hope the recent article celebrating Ada Lovelace Day 2024–featuring interviews with women in Debian–will serve as an inspiring motivation for more women to join our community.

MiniDebConf Cambridge

This was my first time attending the MiniDebConf in Cambridge, hosted at the ARM building. I thoroughly enjoyed the welcoming atmosphere of both MiniDebCamp and MiniDebConf. It was wonderful to reconnect with people who hadn't made it to the last two DebConfs, and, as always, there was plenty of hacking, insightful discussions, and valuable learning.

If you missed the recent MiniDebConf, there's a great opportunity to attend the next one in Toulouse. It was recently decided to include a MiniDebCamp beforehand as well.

FTPmaster accepts MRs for DAK

At the recent MiniDebConf in Cambridge, I discussed potential enhancements for DAK to make life easier for both FTP Team members and developers. For those interested, the document "Hacking on DAK" provides guidance on setting up a local DAK instance and developing patches, which can be submitted as MRs.

As a perfectly random example of such improvements some older MR, "Add commands to accept/reject updates from a policy queue" might give you some inspiration.

At MiniDebConf, we compiled an initial list of features that could benefit both the FTP Team and the developer community. While I had preliminary discussions with the FTP Team about these items, not all ideas had consensus. I aim to open a detailed, public discussion to gather broader feedback and reach a consensus on which features to prioritize.

  • Accept+Bug report

Sometimes, packages are rejected not because of DFSG-incompatible licenses but due to other issues that could be resolved within an existing package (as discussed in my DebConf23 BoF, "Chatting with ftpmasters"[1]). During the "Meet the ftpteam" BoF (a log/transcription of the BoF can be found here), a new option – for the moment, until the MR gets accepted – was proposed for FTP Team members reviewing packages in NEW:

Accept + Bug Report

This option would allow a package to enter Debian (in unstable or experimental) with an automatically filed RC bug report. The RC bug would prevent the package from migrating to testing until the issues are addressed. To ensure compatibility with the BTS, which only accepts bug reports for existing packages, a delayed job (24 hours post-acceptance) would file the bug.

  • Binary name changes - for instance if done to experimental not via new

When binary package names change, currently the package must go through the NEW queue, which can delay the availability of updated libraries. Allowing such packages to bypass the queue could expedite this process. A configuration option to enable this bypass specifically for uploads to experimental may be useful, as it avoids requiring additional technical review for experimental uploads.

Previously, I believed the requirement for binary name changes to pass through NEW was due to a missing feature in DAK, possibly addressable via an MR. However, in discussions with the FTP Team, I learned this is a matter of team policy rather than technical limitation. I haven't found this policy documented, so it may be worth having a community discussion to clarify and reach consensus on how we want to handle binary name changes to get the MR sensibly designed.

  • Remove dependency tree

When a developer requests the removal of a package – whether entirely or for specific architectures – RM bugs must be filed for the package itself as well as for each package depending on it. It would be beneficial if the dependency tree could be automatically resolved, allowing either:

a) the DAK removal tooling to remove the entire dependency tree
   after prompting the bug report author for confirmation, or

b) the system to auto-generate corresponding bug reports for all
   packages in the dependency tree.

The latter option might be better suited for implementation in an MR for reportbug. However, given the possibility of large-scale removals (for example, targeting specific architectures), having appropriate tooling for this would be very beneficial.
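
As a rough illustration of option (b), and explicitly not something that exists in DAK or reportbug today, the reverse dependencies could be collected with apt-cache rdepends and turned into one removal bug template per affected package; removal requests are bugs against the ftp.debian.org pseudo-package, while everything else in this sketch is made up:

# Illustration only: gather reverse dependencies of a package to be removed
# and print one RM bug template per reverse dependency.
import subprocess

def reverse_depends(package: str) -> set[str]:
    out = subprocess.run(
        ["apt-cache", "rdepends", package],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    # First line is the package itself, second is "Reverse Depends:";
    # remaining lines are the reverse dependencies (alternatives carry a "|" prefix).
    return {line.strip().lstrip("|") for line in out[2:] if line.strip()}

def print_rm_bug_templates(package: str) -> None:
    for rdep in sorted(reverse_depends(package)):
        print("To: [email protected]")
        print(f"Subject: RM: {rdep} -- depends on the to-be-removed {package}")
        print()
        # Removal requests are bugs against the ftp.debian.org pseudo-package.
        print("Package: ftp.debian.org")
        print("Severity: normal")
        print()

# print_rm_bug_templates("libfoo1")   # hypothetical library being removed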

In my opinion the proposed DAK enhancements aim to support both FTP Team members and uploading developers. I'd be very pleased if these ideas spark constructive discussion and inspire volunteers to start working on them--possibly even preparing to join the FTP Team.

On the topic of ftpmasters: an ongoing discussion with SPI lawyers is currently reviewing the non-US agreement established 22 years ago. Ideally, this review will lead to a streamlined workflow for ftpmasters, removing certain hurdles that were originally put in place due to legal requirements, which were updated in 2021.

Contacting teams

My outreach efforts to Debian teams have slowed somewhat recently. However, I want to emphasize that anyone from a packaging team is more than welcome to reach out to me directly. My outreach emails aren't following any specific order--just my own somewhat naïve view of Debian, which I'm eager to make more informed.

Recently, I received two very informative responses: one from the Qt/KDE Team, which thoughtfully compiled input from several team members into a shared document. The other was from the Rust Team, where I received three quick, helpful replies–one of which included an invitation to their upcoming team meeting.

Interesting readings on our mailing lists

I consider the following threads on our mailing lists interesting reading and would like to add some comments.

Sensible languages for *****er contributors

Though the discussion on debian-devel about programming languages took place in September, I recently caught up with it. I strongly believe Debian must continue evolving to stay relevant for the future.

"Everything must change, so that everything can stay the same." -- Giuseppe Tomasi di Lampedusa, The Leopard

I encourage constructive discussions on integrating programming languages in our toolchain that support this evolution.

Concerns regarding the "Open Source AI Definition"

A recent thread on the debian-project list discussed the "Open Source AI Definition". This topic will impact Debian in the future, and we need to reach an informed decision. I'd be glad to see more perspectives in the discussions – particularly on finding a sensible consensus, understanding how FTP Team members view their delegated role, and considering whether their delegation might need adjustments for clarity on this issue.

Kind regards, Andreas.

06 November, 2024 11:00PM by Andreas Tille

hackergotchi for Daniel Lange

Daniel Lange

Weird times ... or how the New York DEC decided the US presidential elections

November 2024 will be known as the time when the killing of Peanut, a pet squirrel, by the New York State DEC swung the US presidential elections and shaped history forever.

The hundreds of millions of dollars spent on each side, the tireless campaigning by the candidates, the celebrity endorsements ... all made for an open race for months. Investments evened each other out.

But an OnlyFans producer showing people an overreaching, bureaucracy-driven State raiding his home to confiscate a pet squirrel and kill it ... swung enough voters to decide the elections.

That is what we need to understand in times of instant worldwide publication and a mostly attention-driven economy: human fates, elections, economic cycles and wars can be decided by people killing squirrels.

RIP, peanut.

P.S.: Trump Media & Technology Group Corp. (DJT) stock is up 30% pre-market.

06 November, 2024 09:15AM by Daniel Lange

hackergotchi for Jaldhar Vyas

Jaldhar Vyas

Making America Great Again

Making America Great Again

Justice For Peanut

Some interesting takeaways (with the caveat that exit polls are not completely accurate and we won't have the full picture for days):

  • President Trump seems to have won the popular vote, which no Republican has done, I believe, since Reagan.

  • Apparently women didn't particularly care about abortion (CNN said only 14% considered it their primary issue). There is a noticeable divide, but it is single versus married, not women versus men per se.

  • Hispanics who are here legally voted against Hispanics coming here illegally. Latinx's didn't vote for anything because they don't exist.

  • The infamous MSG rally joke had no effect on the voting habits of Puerto Ricans.

  • Republicans have taken the Senate and if trends continue as they are will retain control of the House of Representatives.

  • President Biden may have actually been a better candidate than Border Czar Harris.

06 November, 2024 07:11AM

November 04, 2024

Ravi Dwivedi

Asante Kenya for a Good Time

In September of this year, I visited Kenya to attend the State of the Map conference. I spent six nights in Nairobi, two nights in Mombasa, and one night on a train. I was very happy with the visa process being smooth and quick. Furthermore, I stayed at the Nairobi Transit Hotel with other attendees, with Ibtehal from Bangladesh as my roommate. One of the memorable moments was the time I spent at a local coffee shop nearby. We used to go there at midnight, despite the grating in the shops suggesting such adventures were unsafe. Fortunately, nothing bad happened, and we were rewarded with a fun time with the locals.

The coffee shop Ibtehal and I used to visit at midnight

Grating at a chemist shop in Mombasa, Kenya

The country lies on the equator, which might give the impression of extremely hot temperatures. However, Nairobi was on the cooler side (10–25 degrees Celsius), and I found myself needing a hoodie, which I bought the next day. It also served as a nice souvenir, as it had an outline of the African map printed on it.

I bought a Safaricom SIM card for 100 shillings and recharged it with 1000 shillings for 8 GB internet with 5G speeds and 400 minutes talk time.

A visit to Nairobi’s Historic Cricket Ground

On this trip, I got a unique souvenir that can’t be purchased from the market—a cricket jersey worn in an ODI match by a player. The story goes as follows: I was roaming around the market with my friend Benson from Nairobi to buy a Kenyan cricket jersey for myself, but we couldn’t find any. So, Benson had the idea of visiting the Nairobi Gymkhana Club, which used to be Kenya’s main cricket ground. It has hosted some historic matches, including the 2003 World Cup match in which Kenya beat the mighty Sri Lankans, and it is also where Shahid Afridi set the record for the fastest ODI century, off just 37 balls, in 1996.

Although entry to the club was exclusively for members, I was warmly welcomed by the staff. Upon reaching the cricket ground, I met some Indian players who played in Kenyan leagues, as well as Lucas Oluoch and Dominic Wesonga, who have represented Kenya in ODIs. When I expressed interest in getting a jersey, Dominic agreed to send me pictures of his jersey. I liked his jersey and collected it from him. I gave him 2000 shillings, an amount suggested by those Indian players.

Me with players at the Nairobi Gymkhana Club

Cricket pitch at the Nairobi Gymkhana Club

A view of the cricket ground inside the Nairobi Gymkhana Club

Scoreboard at the Nairobi Gymkhana cricket ground

Giraffe Center in Nairobi

Kenya is known for its safaris and has no shortage of national parks. In fact, Nairobi is the only capital in the world with a national park. I decided not to visit a national park, as most of them were expensive and offered multi-day tours, and I didn’t want to spend that much time on wildlife.

Instead, I went to the Giraffe Center in Nairobi with Pragya and Rabina. The ticket cost 1500 Kenyan shillings (1000 Indian rupees). In Kenya, matatus - shared vans, usually decorated with portraits of famous people and playing rap songs - are the most popular means of public transport. Reaching the Giraffe Center from our hotel required taking five matatus, which cost a total of 150 shillings, and a 2 km walk. The journey back was 90 shillings, suggesting that we didn’t find the most efficient route to get there. At the Giraffe Center, we fed giraffes and took photos.

A matatu with a Notorious BIG portrait.

Inside the Giraffe Center

Train ride from Nairobi to Mombasa

I took a train from Nairobi to Mombasa. The train is known as the “SGR Train,” where “SGR” refers to “Standard Gauge Railway.” The journey was around 500 km. M-Pesa was the only way to make payment for pre-booking the train ticket, and I didn’t have an M-Pesa account. Pragya’s friend Mary helped facilitate the payment. I booked a second-class ticket, which cost 1500 shillings (1000 Indian rupees).

The train was scheduled to depart from Nairobi at 08:00 hours in the morning and arrive in Mombasa at 14:00 hours. The security check at the station required scanning our bags and having them sniffed by sniffer dogs. I also fell victim to a scam by a security official who offered to help me get my ticket printed, only to later ask me to get him some coffee, which I politely declined.

Before boarding the train, I was treated to some stunning views at the Nairobi Terminus station. It was a seating train, but I wished it were a sleeper train, as I was sleep-deprived. The train was neat and clean, with good toilets. The train reached Mombasa on time at around 14:00 hours.

SGR train at Nairobi Terminus.

Interior of the SGR train

Arrival in Mombasa

Mombasa Terminus station.

Mombasa was a bit hotter than Nairobi, with temperatures reaching around 30 degrees Celsius. However, that’s not too hot for me, as I am used to higher temperatures in India. I had booked a hostel in the Old Town and was searching for a hitchhike from the Mombasa Terminus station. After trying for more than half an hour, I took a matatu that dropped me 3 km from my hostel for 200 shillings (140 Indian rupees). I tried to hitchhike again but couldn’t find a ride.

I think I know why I couldn’t get a ride in both cases. In the first case, the Mombasa Terminus was in an isolated place, so most of the vehicles were taxis or matatus, while any noncommercial cars were there to pick up friends and family. If the station were in the middle of the city, there would be many more car/truck drivers passing by, thus increasing my chances of getting a ride. In the second case, my hostel was at the end of the city, and nobody was going towards that side. In fact, many drivers told me they would love to give me a ride, but they were going in some other direction.

Finally, I took a tuktuk for 70 shillings to reach my hostel, Tulia Backpackers. It was 11 USD (1400 shillings) for one night. The balcony gave a nice view of the Indian Ocean. The rooms had fans, but there was no air conditioning. Each bed also had mosquito nets. The place was within walking distance of the famous Fort Jesus. Mombasa has had more Islamic influence than Nairobi and also has many Hindu temples.

The balcony at Tulia Backpackers Hostel had a nice view of the ocean.

A room inside the hostel with fans and mosquito nets on the beds

Visiting White Sandy Beaches and Getting a Hitchhike

Visiting Nyali beach marked my first time ever at a white sand beach. It was like 10 km from the hostel. The next day, I visited Diani Beach, which was 30 km from the hostel. Going to Diani Beach required crossing a river, for which there’s a free ferry service every few minutes, followed by taking a matatu to Ukunda and then a tuk-tuk to Diani Beach. This gave me an opportunity to see the beautiful countryside during the ride.

Nyali beach is a white sand beach

This is the ferry service for crossing the river.

During my return from Diani Beach to the hostel, I was successful in hitchhiking. However, it was only a 4 km ride and not sufficient to reach Ukunda, so I tried to get another ride. When a truck stopped for me, I asked for a ride to Ukunda. Later, I learned that they were going in the same direction as me, so I got off within walking distance from my hostel. The ride was around 30 km. I also learned the difference between a truck ride and a matatu or car ride. For instance, matatus and cars are much faster and cooler due to air conditioning, while trucks tend to be warmer because they lack it. Further, the truck was stopped at many checkpoints by the police for inspections as it carried goods, which is not the case with matatus. Anyways, it was a nice experience, and I am grateful for the ride. I had a nice conversation with the truck drivers about Indian movies and my experiences in Kenya.

Diani beach is a popular beach in Kenya. It is a white sand beach.

Selfie with truck drivers who gave me the free ride

Back to Nairobi

I took the SGR train from Mombasa back to Nairobi. This time I took the night train, which departs at 22:00 hours, reaching Nairobi at around 04:00 in the morning. I could not sleep comfortably since the train only had seating coaches, not sleeper berths.

I had booked the Zarita Hotel in Nairobi and had already confirmed that they allowed early-morning check-in. Usually, hotels have a fixed checkout time, say 11:00 in the morning, and you are not allowed to stay beyond that regardless of the time you checked in. But this hotel checked me in for 24 hours. Here, I paid in US dollars, and the cost was 12 USD.

Almost Got Stuck in Kenya

Two days before my scheduled flight from Nairobi back to India, I heard the news that the airports in Kenya were closed due to the strikes. Rabina and Pragya had their flight back to Nepal canceled that day, which left them stuck in Nairobi for two additional days. I called Sahil in India and found out during the conversation that the strike was called off in the evening. It was a big relief for me, and I was fortunate to be able to fly back to India without any changes to my plans.

Newspapers at a stand in Kenya covering news on the airport closure

Experience with locals

I had no problems communicating with Kenyans, as everyone I met knew English to an extent that could easily surpass that of big cities in India. Additionally, I learned a few words from Kenya’s most popular local language, Swahili, such as “Asante,” meaning “thank you,” “Jambo” for “hello,” and “Karibu” for “welcome.” Knowing a few words in the local language went a long way.

I am not sure what’s up with haggling in Kenya. It wasn’t easy to bring the price of souvenirs down. I bought a fridge magnet for 200 shillings, which was the quoted price. On the other hand, it was much easier to bargain with taxis/tuktuks/motorbikes.

I stayed at three hotels/hostels in Kenya. None of them had air conditioners. Two of the places were in Nairobi, and they didn’t even have fans in the rooms, while the one in Mombasa had only fans. All of them had good Wi-Fi, except Tulia where the internet overall was a bit shaky.

My experience with the hotel staff was great. For instance, we requested that the Nairobi Transit Hotel cancel the included breakfast in order to reduce the room costs, but later realized that it was not a good idea. The hotel allowed us to revert and even offered one of our missing breakfasts during dinner.

The staff at Tulia Backpackers in Mombasa facilitated the ticket payment for my train from Mombasa to Nairobi. One of the staff members also gave me a lift to the place where I could catch a matatu to Nyali Beach. They even added an extra tea bag to my tea when I requested it to be stronger.

Food

At the Nairobi Transit Hotel, a Spanish omelette with tea was served for breakfast. I noticed that Spanish omelette appeared on the menus of many restaurants, suggesting that it is popular in Kenya. This was my first time having this dish. The milk tea in Kenya, referred to by locals as “white tea,” is lighter than Indian tea (they don’t use a lot of tea leaves).

Spanish Omelette served in breakfast at Nairobi Transit Hotel

I also sampled ugali with eggs. In Mombasa, I visited an Indian restaurant called New Chetna and had a buffet thali there twice.

Ugali with eggs.

Tips for Exchanging Money

In Kenya, I exchanged my money at forex shops a couple of times. I received good exchange rates for bills larger than 50 USD. For instance, 1 USD on xe.com was 129 shillings, and I got 128.3 shillings per USD (a total of 12,830 shillings) for two 50 USD notes at an exchange in Nairobi, compared to 127 shillings, which was the highest rate at the banks. On the other hand, for each 1 USD note, I would have received an exchange rate of 125 shillings. A passport was the only document required for the exchange, and they also provided a receipt.

A good piece of advice for travelers is to keep 50 USD or larger bills for exchanging into the local currency while saving the smaller US dollar bills for accommodation, as many hotels and hostels accept payment in US dollars.

Missed Malindi and Lamu

There were more places on my to-visit list in Kenya. But I simply didn’t have time to cover them, as I don’t like rushing through places, especially in a foreign country where there is a chance of me underestimating the amount of time it takes during transit. I would have liked to visit at least one of Kilifi, Watamu or Malindi beaches. Further, Lamu seemed like a unique place to visit as it has no cars or motorized transport; the only options for transport are boats and donkeys.

04 November, 2024 07:25PM

Sven Hoexter

Google CloudDNS HTTPS Records with ipv6hint

I naively provisioned an HTTPS record at Google CloudDNS like this via terraform:

resource "google_dns_record_set" "testv6" {
    name         = "testv6.some-domain.example."
    managed_zone = "some-domain-example"
    type         = "HTTPS"
    ttl          = 3600
    rrdatas      = ["1 . alpn=\"h2\" ipv4hint=\"198.51.100.1\" ipv6hint=\"2001:DB8::1\""]
}

This results in a permanent diff because the Google CloudDNS API seems to parse the record content and store the ipv6hint expanded (removing the :: notation) and in all lowercase, as 2001:db8:0:0:0:0:0:1. Thus, to fix the permanent diff, we have to use it like this:

resource "google_dns_record_set" "testv6" {
    name = "testv6.some-domain.example."
    managed_zone = "some-domain-example"
    type = "HTTPS"
    ttl = 3600
    rrdatas = ["1 . alpn=\"h2\" ipv4hint=\"198.51.100.1\" ipv6hint=\"2001:db8:0:0:0:0:0:1\""]
}
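
If the record data is generated outside terraform (for example by a script that renders tfvars), a small helper can pre-normalize the address into the form the API appears to store, keeping the diff clean. A minimal sketch, with the normalization rule inferred from the observed behaviour above rather than from any documented contract:

# Sketch: normalize an IPv6 address into the form CloudDNS appears to store:
# "::" expanded, leading zeros dropped per group, all lowercase.
import ipaddress

def clouddns_ipv6hint(address: str) -> str:
    exploded = ipaddress.IPv6Address(address).exploded  # e.g. 2001:0db8:0000:...:0001
    return ":".join(group.lstrip("0") or "0" for group in exploded.split(":"))

print(clouddns_ipv6hint("2001:DB8::1"))  # -> 2001:db8:0:0:0:0:0:1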

Guess I should be glad that they already support HTTPS records natively, and not bicker too much about the implementation details.

04 November, 2024 01:11PM

November 03, 2024

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Ultimate rules as a service

Since WFDF changed their ultimate rules web site to be less-than-ideal (in the name of putting everything into Wordpress…), I made my own, at urules.org. It was a fun journey; I've never fiddled with PWAs before, and I was a bit surprised how low-level it all was. I assumed that since my page is just a bunch of HTML files and ~100 lines of JS, I could just bundle that up—but no, that is something they expect a framework to do for you.

The only primitive you get is seemingly that you can fire up your own background service worker (JS running in its own, locked-down context) and that gets to peek at every HTTP request done and possibly intercept it. So you can use a Web Cache (seemingly a separate concept from web local storage?), insert stuff into that, and then query it to intercept requests. It doesn't feel very elegant, perhaps?

It is a bit neat that I can use this to make my own bundling, though. All the pages and images (painfully converted to SVG to save space and re-flow for mobile screens, mostly by simply drawing over bitmaps by hand in Inkscape) are stuck into a JSON dictionary, compressed using the slowest compressor I could find and then downloaded as a single 159 kB bundle. It makes the site actually sort of weird to navigate; since it pretty quickly downloads the bundle in the background, everything goes offline and the speed of loading new pages just feels… off somehow. As if it's not a Serious Web Page if there's no load time.

Of course, this also means that I couldn't cache PNGs, because have you ever tried to have non-UTF-8 data in a JSON sent through N layers of JavaScript? :-)

03 November, 2024 10:48AM

hackergotchi for Guido Günther

Guido Günther

Free Software Activities October 2024

Another short status update of what happened on my side last month. Besides a phosh bugfix release, improving text input and selection was a prevalent pattern again, resulting in improvements in the compositor, the OSK and some apps.

phosh

  • Install gir (MR). Needed for e.g. Debian to properly package the Rust bindings.
  • Try harder to find an app icon when showing notifications (MR)
  • Add a simple Pomodoro timer plugin (MR)
  • Small screenshot manager fixes (MR)
  • Tweak portals configuration (MR)
  • Consistent focus style on lock screen and settings (MR). Improves the visual appearance as the dotted focus frame doesn't match our otherwise colored focus frames
  • Don't focus buttons in settings (MR). Improves the visual appearance as attention isn't drawn to the button focus.
  • Close Phosh's settings when activating a Settings panel (MR)

phoc

  • Improve cursor and cursor theme handling, hide mouse pointer by default (MR)
  • Don't submit empty preedit (MR)
  • Fix flickering selection bubbles in GTK4's text input fields (MR)
  • Backport two more fixes and release 0.41.1 (MR)

phosh-mobile-settings

  • Allow to select default text completer (MR, MR)
  • Don't crash when we fail to load a pref plugin (MR)

libphosh-rs

  • Update with current gir and allow to use status pages (MR)
  • Expose screenshot manager and build without warnings (MR). (Improved further by a follow up MR from Sam)
  • Fix clippy warnings and add clippy to CI (MR)

phosh-osk-stub

  • presage: Always set predictors (MR). Avoids surprises with unwanted predictors.
  • Install completer information (MR)
  • Handle overlapping touch events (MR). This should improve fast typing.
  • Allow plain ctrl and alt in the shortcuts bar (MR)
  • Use Adwaita background color to make the OSK look more integrated (MR)
  • Use StyleManager to support accent colors (MR)
  • Fix emoji section selection in RTL locales (MR)
  • Don't submit empty preedit (MR). Helps to better preserve text selections.

phosh-osk-data

  • Add scripts to build word corpus from Wikipedia data (MR). See here for the data.

xdg-desktop-portal-phosh

  • Release 0.42~rc1 (MR)
  • Fix HighContrast (MR)

Debian

  • Collect some of the QCom workarounds in a package (MR). This is not meant to go into Debian proper but it's nicer than doing all the mods by hand and forgetting which files were modified.
  • q6voiced: Fix service configuration (MR)
  • chatty: Enable clock test again (MR), and then unbreak translations (MR)
  • phosh: Ship gir for libphosh-rs (MR)
  • phoc: Backport input method related fix (MR)
  • Upload initial package of phosh-osk-data: Status in NEW
  • Upload initial package of xdg-desktop-portal-pohsh: Status in NEW
  • Backport phosh-osk-stub abbrev fix (MR)
  • phoc: Update to 0.42.1 (MR)
  • mobile-tweaks: Enable zram on Librem 5 and PP (MR)

ModemManager

  • Some further work on the Cell Broadcast to address comments (MR)

Calls

  • Further improve daemon mode (MR) (mentioned last month already but got even simpler)

GTK

  • Handle Gtk{H,V}Separator when migrating UI files to GTK4 (MR)

feedbackd

  • Modernize README a bit (MR)

Chatty

  • Use special event for SMS (MR)
  • Another QoL fix when using OSK (MR)
  • Fix printing time diffs on 32bit architectures (MR)

libcmatrix

  • Use endpoints for authenticated media (MR). Needed to support v1.11 servers.

phosh-ev

  • Switch to GNOME 47 runtime (MR)

git-buildpackage

  • Don't use deprecated pkg-resources (MR)

Unified push specification

  • Expand on DBus activation a bit (MR)

swipeGuess

  • Small build improvement and mention phosh-osk-stub (Commit)

wlr-clients

  • Fix -o option and add help output (MR)

iotas (Note taking app)

  • Don't take focus with header bar buttons (MR). Makes typing faster (as the OSK won't hide) and thus using the header bar easier

Flare (Signal app)

  • Don't take focus when sending messages, adding emojis or attachments (MR). Makes typing faster (as the OSK won't hide) and thus using those buttons easier

xdg-desktop-portal

  • Use categories that work for both xdg-spec and the portal (MR)

Reviews

This is not code by me but reviews of other people's code. The list is fairly incomplete; I hope to improve on this in the upcoming months:

  • phosh-tour: add first login mode (MR)
  • phosh: Animate swipe closing notifications (MR)
  • iio-sensor-proxy: Report correct value on claim (MR)
  • iio-sensor-proxy: face-{up,down} (MR)
  • phosh-mobile-settings: Squeekboad scaling (MR)
  • libcmatrix: Misc cleanups/fixes (MR)
  • phosh: Notification separator improvements (MR)
  • phosh: Accent colors (MR)

Help Development

If you want to support my work see donations. This includes a list of hardware we want to improve support for. Thanks a lot to all current and past donors.

03 November, 2024 10:17AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Doing more swimming in everyday life for the past few months.

Doing more swimming in everyday life for the past few months. Seems like I am keeping that up.

03 November, 2024 09:24AM by Junichi Uekawa

November 02, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp 1.0.13-1 on CRAN: Hot Fix

Rcpp logo

A hot-fix release 1.0.13-1, consisting of two small PRs relative to the last regular CRAN release 1.0.13, just arrived on CRAN. When we prepared 1.0.13, we included a change related to the ‘tightening’ of the C API of R itself. Sadly, we pinned an expected change to ‘comes with next (minor) release 4.4.2’ rather than now ‘next (normal aka major) release 4.5.0’. And now that R 4.4.2 is out (as of two days ago) we accidentally broke building against the header file with that check. Whoops. Bugs happen, and we are truly sorry—but this is now addressed in 1.0.13-1.

The normal (bi-annual) release cycle will resume with 1.0.14 slated for January. As you can see from the NEWS file of the development branch, we have a number of changes coming. You can safely access that release candidate version, either off the default branch at github or via r-universe artifacts.

The list below details all changes, as usual. The only other change concerns the now-mandatory use of Authors@R.

Changes in Rcpp release version 1.0.13-1 (2024-11-01)

  • Changes in Rcpp API:

    • Use read-only VECTOR_PTR and STRING_PTR only with R 4.5.0 or later (Kevin in #1342 fixing #1341)
  • Changes in Rcpp Deployment:

    • Authors@R is now used in DESCRIPTION as mandated by CRAN

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments, etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

02 November, 2024 09:13PM

Russell Coker

More About the Yoga Gen3

Two months ago I bought a Thinkpad X1 Yoga Gen3 [1]. I’m still very happy with it; the screen is a great improvement over the FullHD screen on my previous Thinkpad. I have yet to discover what the best resolution for a laptop is if price isn’t an issue, but it’s at least 1440p for a 14″ display, which is 210DPI. The latest Thinkpad X1 Yoga is the 7th gen and has up to 3840*2400 resolution on the internal display for 323DPI. Apple apparently uses the term “Retina Display” to mean something in the range of 250DPI to 300DPI, so my current laptop is below “Retina” while the most expensive new Thinkpads are above it.

I did some tests on external displays and found that this Thinkpad along with a Dell Latitude of the same form factor and about the same age can only handle one 4K display on a Thunderbolt dock and one on HDMI. On Reddit u/Carlioso1234 pointed out this specs page which says it supports a maximum of 3 displays including the built in TFT [2]. The Thunderbolt/USB-C connection has a maximum resolution of 5120*2880 and the HDMI port has a maximum of 4K. The latest Yoga can support four displays total which means 2*5K over Thunderbolt and one 4K over HDMI. It would be nice if someone made a 8000*2880 ultrawide display that looked like 2*5K displays when connected via Thunderbolt. It would also be nice if someone made a 32″ 5K display, currently they all seem to be 27″ and I’ve found that even for 4K resolution 32″ is better than 27″.

With the typical configuration of Linux and the BIOS the Yoga Gen3 will have its touch screen stop working after suspend. I have confirmed this for stylus use, but as the finger-touch functionality is broken I couldn’t confirm it for finger input. On r/thinkpad u/p9k told me how to fix this problem [3]. I had to set the BIOS to Win 10 Sleep aka Hybrid sleep and then put the following in /etc/systemd/system/thinkpad-wakeup-config.service:

# https://www.reddit.com/r/thinkpad/comments/1blpy20/comment/kw7se2l/?context=3

[Unit]
Description=Workarounds for sleep wakeup source for Thinkpad X1 Yoga 3
After=sysinit.target
After=systemd-modules-load.service

[Service]
Type=oneshot
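# Mark the i8042 serio ports (keyboard/touchpad controller) as wakeup sources,
# then toggle the LID entry in /proc/acpi/wakeup.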
ExecStart=/bin/sh -c "echo 'enabled' > /sys/devices/platform/i8042/serio0/power/wakeup"
ExecStart=/bin/sh -c "echo 'enabled' > /sys/devices/platform/i8042/serio1/power/wakeup"
ExecStart=/bin/sh -c "echo 'LID' > /proc/acpi/wakeup"

[Install]
WantedBy=multi-user.target

Now it works fine, for stylus at least. I still get kernel error messages like the following which don’t seem to cause problems:

wacom 0003:056A:5146.0005: wacom_idleprox_timeout: tool appears to be hung in-prox. forcing it out.

When it wasn’t working I got the above but also kernel error messages like:

wacom 0003:056A:5146.0005: wacom_wac_queue_insert: kfifo has filled, starting to drop events

This change affected the way suspend etc operate. Now when I connect the laptop to power it will leave suspend mode. I’ve configured KDE to suspend when the lid is closed and there’s no monitor connected.

02 November, 2024 08:05AM by etbe

Moving Between Devices

I previously wrote about the possibility of transferring work between devices as an alternative to “convergence” (using a phone or tablet as a desktop) [1]. This idea has been implemented in some commercial products already.

MrWhosTheBoss made a good YouTube video reviewing recent Huawei products [2]. At 2:50 in that video he shows how you can link a phone and tablet, control one from the other, drag and drop of running apps and files between phone and tablet, mirror the screen between devices, etc. He describes playing a video on one device and having it appear on the other, I hope that it actually launches a new instance of the player app as the Google Chromecast failed in the market due to remote display being laggy. At 7:30 in that video he starts talking about the features that are available when you have multiple Huawei devices, starting with the ability to move a Bluetooth pairing for earphones to a different device.

At 16:25 he shows what Huawei is doing to get apps going, including allowing apk files to be downloaded and creating what they call “Quick Apps”, which are instances of a web browser configured to just use one web site and make it look like a discrete app. We need something like this for FOSS phone distributions – does anyone know of a browser that’s good for it?

Another thing that we need is an easy way of transferring open web pages between systems. Chrome allows sending pages between systems but it’s proprietary, limited to Chrome only, and also takes an unreasonable amount of time. KDEConnect allows sharing clipboard contents, which can be used to send URLs that can then be pasted into a browser, but the process of copying the URL, sending it via KDEConnect, and pasting it on the other device is unreasonably slow. The design of Chrome with a “Send to your devices” menu option from the tab bar is OK. But ideally we need a “Send to device” for all tabs of a window as well; we need it to run on free software and support using your own server, not someone else’s server (AKA “the cloud”). Some of the KDEConnect functionality, but using a server rather than a direct connection over the same Wifi network (or LAN if bridged to Wifi), would be good.

What else do we need?

02 November, 2024 08:03AM by etbe

What is a Workstation?

I recently had someone describe a Mac Mini as a “workstation”, which I strongly disagree with. The Wikipedia page for Workstation [1] says that it’s a type of computer designed for scientific or technical use, for a single user, and would commonly run a multi-user OS.

The Mac Mini runs a multi-user OS and is designed for a single user. The issue is whether it is for “scientific or technical use”. A Mac Mini is a nice little graphical system which could be used for CAD and other engineering work. But I believe that the low capabilities of the system and lack of expansion options make it less of a workstation.

The latest versions of the Mac Mini (to be officially launched next week) have up to 64G of RAM and up to 8T of storage. That is quite decent compute power for a small device. For comparison the HP ML 110 Gen9 workstation I’m currently using was released in 2021 and has 256G of RAM and has 4 * 3.5″ SAS bays so I could easily put a few 4TB NVMe devices and some hard drives larger than 10TB. The HP Z640 workstation I have was released in 2014 and has 128G of RAM and 4*2.5″ SATA drive bays and 2*3.5″ SATA drive bays. Previously I had a Dell PowerEdge T320 which was released in 2012 and had 96G of RAM and 8*3.5″ SAS bays.

In CPU and GPU power the recent Mac Minis will compare well to my latest workstations. But they compare poorly to workstations from as much as 12 years ago for RAM and storage. Which is more important depends on the task: if you have to do calculations on 80G of data with lots of scans through the entire data set, then a system with 64G of RAM will perform very poorly and a system with 96G and a CPU less than half as fast will perform better. A Dell PowerEdge T320 from 2012 fully loaded with 192G of RAM will outperform a modern Mac Mini on many tasks due to this, and the T420 supported up to 384G.

Another issue is generic expansion options. I expect a workstation to have a number of PCIe slots free for GPUs and other devices. The T320 I used to use had a PCIe power cable for a power hungry GPU and I think all the T320 and T420 models with high power PSUs supported that.

I think that a usable definition of a “workstation” is a system having a feature set that is typical of servers (ECC RAM, lots of storage for RAID, maybe hot-swap storage devices, maybe redundant PSUs, and lots of expansion options) while also being suitable for running on a desktop or under a desk. The Mac Mini is nice for running on a desk but that’s the only workstation criteria it fits. I think that ECC RAM should be a mandatory criteria and any system without it isn’t a workstation. That excludes most Apple hardware. The Mac Mini is more of a thin-client than a workstation.

My main workstation with ECC RAM could run 3 VMs that each have more RAM than the largest Mac Mini that will be sold next week.

If 32G of non-ECC RAM is considered enough for a “workstation” then you could get an Android phone that counts as a workstation – and it will probably cost less than a Mac Mini.

02 November, 2024 05:03AM by etbe

November 01, 2024

hackergotchi for Colin Watson

Colin Watson

Free software activity in October 2024

Almost all of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

Ansible

I noticed that Ansible had fallen out of Debian testing due to autopkgtest failures. This seemed like a problem worth fixing: in common with many other people, we use Ansible for configuration management at Freexian, and it probably wouldn’t make our sysadmins too happy if they upgraded to trixie after its release and found that Ansible was gone.

The problems here were really just slogging through test failures in both the ansible-core and ansible packages, but their test suites are large and take a while to run so this took some time. I was able to contribute a few small fixes to various upstreams in the process:

This should now get back into testing tomorrow.

OpenSSH

Martin-Éric Racine reported that ssh-audit didn’t list the ext-info-s feature as being available in Debian’s OpenSSH 9.2 packaging in bookworm, contrary to what OpenSSH upstream said on their specifications page at the time. I spent some time looking into this and realized that upstream was mistakenly saying that implementations of ext-info-c and ext-info-s were added at the same time, while in fact ext-info-s was added rather later. ssh-audit now has clearer output, and the OpenSSH maintainers have corrected their specifications page.

I looked into a report of an ssh failure in certain cases when using GSS-API key exchange (which is a Debian patch). Once again, having integration tests was a huge win here: the affected scenario is quite a fiddly one, but I was able to set it up in the test, and thereby make sure it doesn’t regress in future. It still took me a couple of hours to get all the details right, but in the past this sort of thing took me much longer with a much lower degree of confidence that the fix was correct.

On upstream’s advice, I cherry-picked some key exchange fixes needed for big-endian architectures.

Python team

I packaged python-evalidate, needed for a new upstream version of buildbot.

The Python 3.13 transition rolls on. I fixed problems related to it in htmlmin, humanfriendly, postgresfixture (contributed upstream), pylint, python-asyncssh (contributed upstream), python-oauthlib, python3-simpletal, quodlibet, zope.exceptions, and zope.interface.

A trickier Python 3.13 issue involved the cgi module. Years ago I ported zope.publisher to the multipart module because cgi.FieldStorage was broken in some situations, and as a result I got a recommendation into Python’s “dead batteries” PEP 594. Unfortunately there turns out to be a name conflict between multipart and python-multipart on PyPI; python-multipart upstream has been working to disentangle this, though we still need to work out what to do in Debian. All the same, I needed to fix python-wadllib and multipart seemed like the best fit; I contributed a port upstream and temporarily copied multipart into Debian’s python-wadllib source package to allow its tests to pass. I’ll come back and fix this properly once we sort out the multipart vs. python-multipart packaging.

tzdata moved some timezone definitions to tzdata-legacy, which has broken a number of packages. I added tzdata-legacy build-dependencies to alembic and python-icalendar to deal with this in those packages, though there are still some other instances of this left.

I tracked down an nltk regression that caused build failures in many other packages.

I fixed Rust crate versioning issues in pydantic-core, python-bcrypt, and python-maturin (mostly fixed by Peter Michael Green and Jelmer Vernooij, but it needed a little extra work).

I fixed other build failures in entrypoints, mayavi2, python-pyvmomi (mostly fixed by Alexandre Detiste, but it needed a little extra work), and python-testing.postgresql (ditto).

I fixed python3-simpletal to tolerate future versions of dh-python that will drop their dependency on python3-setuptools.

I fixed broken symlinks in python-treq.

I removed (build-)depends on python3-pkg-resources from alembic, autopep8, buildbot, celery, flufl.enum, flufl.lock, python-public, python-wadllib (contributed upstream), pyvisa, routes, vulture, and zodbpickle (contributed upstream).

I upgraded astroid, asyncpg (fixing a Python 3.13 failure and a build failure), buildbot (noticing an upstream test bug in the process), dnsdiag, frozenlist, netmiko (fixing a Python 3.13 failure), psycopg3, pydantic-settings, pylint, python-asyncssh, python-bleach, python-btrees, python-cytoolz, python-django-pgtrigger, python-django-test-migrations, python-gssapi, python-icalendar, python-json-log-formatter, python-pgbouncer, python-pkginfo, python-plumbum, python-stdlib-list, python-tokenize-rt, python-treq (fixing a Python 3.13 failure), python-typeguard, python-webargs (fixing a build failure), pyupgrade, pyvisa, pyvisa-py (fixing a Python 3.13 failure), toolz, twisted, vulture, waitress (fixing CVE-2024-49768 and CVE-2024-49769), wtf-peewee, wtforms, zodbpickle, zope.exceptions, zope.interface, zope.proxy, zope.security, and zope.testrunner to new upstream versions.

I tried to fix a regression in python-scruffy, but I need testing feedback.

I requested removal of python-testing.mysqld.

01 November, 2024 12:19PM by Colin Watson

Russ Allbery

Review: Overdue and Returns

Review: Overdue and Returns, by Mark Lawrence

Publisher: Mark Lawrence
Copyright: June 2023
Copyright: February 2024
ASIN: B0C9N51M6Y
ASIN: B0CTYNQGBX
Format: Kindle
Pages: 99

Overdue is a stand-alone novelette in the Library Trilogy universe. Returns is a collection of two stories, the novelette "Returns" and the short story "About Pain." All of them together are about the length of a novella, so I'm combining them into a single review.

These are ancillary stories in the same universe as the novels, but not necessarily in the same timeline. (Trying to fit "About Pain" into the novel timeline will give you a headache and I am choosing to read it as author's fan fiction.) I'm guessing they're part of the new fad for releasing short fiction on Amazon to tide readers over and maintain interest between books in a series, a fad about which I have mixed feelings. Given the total lack of publisher metadata in either the stories or on Amazon, I'm assuming they were self-published even though the novels are published by Ace, but I don't know that for certain.

There are spoilers for The Book That Wouldn't Burn, so don't read these before that novel. There are no spoilers for The Book That Broke the World, and I don't think the reading order would matter.

I found all three of these stories irritating and thuddingly trite. "Returns" is probably the best of the lot in terms of quality of storytelling, but I intensely dislike the structural implications of the nature of the book at its center and am therefore hoping that it's non-canonical.

I would not waste your time with these even if you are enjoying the novels.

"Overdue": Three owners of the same bookstore at different points in time have encounters with an albino man named Yute who is on a quest. One of the owners is trying to write a book, one of them is older, depressed, and closed off, and one of them has regular conversations with her sister's ghost. The nature of the relationship between the three is too much of a spoiler, but it involves similar shenanigans as The Book That Wouldn't Burn.

Lawrence uses my least favorite resolution of benign ghost stories. The story tries very hard to sell it as a good thing, but I thought it was cruel and prefer fantasy that rejects both branches of that dilemma. Other than that, it was fine, I guess, although the moral was delivered with all of the subtlety of the last two minutes of a Saturday morning cartoon. (5)

"Returns": Livira returns a book deep inside the library and finds that she can decipher it, which leads her to a story about Yute going on a trip to recover another library book. This had a lot of great Yute lines, plus I always like seeing Livira in exploration mode. The book itself is paradoxical in a causality-destroying way, which is handwaved away as literal magic. I liked this one the best of the three stories, but I hope the world-building of the main series does not go in this direction and I'm a little afraid it might. (6)

"About Pain": A man named Holden runs into a woman named Clovis at the gym while carrying a book titled Catcher that his dog found and that he's returning to the library. I thoroughly enjoy Clovis and was happy to read a few more scenes about her. Other than that, this was fine, I guess, although it is a story designed to deliver a point and that point is one that appears in every discussion of classics and re-reading that has ever happened on the Internet. Also, I know I'm being grumpy, but Lawrence's puns with authors and character names are chapter-epigraph amusing but not short-story-length funny. Yes, yes, his name is Holden, we get it. (5)

Rating: 5 out of 10

01 November, 2024 04:11AM

Paul Wise

FLOSS Activities October 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Sponsors

All work was done on a volunteer basis.

01 November, 2024 12:57AM

Taavi Väänänen

Custom domains on the Wikimedia Cloud VPS web proxy

The shared web proxy used on Wikimedia Cloud VPS now has technical support for using arbitrary domains (and not just wmcloud.org subdomains) in proxy names. I think this is a good example of how software slowly evolves over time as new requirements emerge, with each new addition building on top of the previous ones.

According to the edit history on Wikitech, the web proxy service has its origins in 2012, although the current idea where you create a proxy and map it to a specific instance and port was only introduced a year later. (Before that, it just directly mapped the subdomain to the VPS instance with the same name).

There were some smaller changes in the coming years like the migration to acme-chief for TLS certificate management, but the overall logic stayed very similar until 2020 when the wmcloud.org domain was introduced. That was implemented by adding a config option listing all possible domains, so future domain additions would be as simple as adding the new domain to that list in the configuration.

Then the changes start becoming more frequent:

  • In 2022, for my Terraform support project, a bunch of logic, including the list of supported backend domains, was moved from the frontend code to the backend. This also made it possible to dynamically change which projects can use which domain suffixes for their proxies.
  • Then, early this year, I added support for zones restricted to a single project, because we wanted to use the proxy for the *.svc.toolforge.org Toolforge infrastructure domains instead of coming up with a new system for that use case. This also added support for using different TLS certificates for different domains, so that we would not have to have a single giant certificate with all the names.
  • Finally, the last step was to add two new features to the proxy system: support for adding a proxy at the apex of a domain, as well as support for domains that are not managed in Designate (the Cloud VPS/OpenStack auth DNS service). In addition, we needed a bit of config to ensure http-01 challenges get routed to the acme-chief instance.

01 November, 2024 12:00AM by Taavi Väänänen ([email protected])

October 31, 2024

hackergotchi for Gunnar Wolf

Gunnar Wolf

Do you have a minute..?

Do you have a minute...?

…to talk about the so-called “Intellectual Property”?

31 October, 2024 10:07PM

October 30, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

gcbd 0.2.7 on CRAN: More Mere Maintenance

Another pure maintenance release 0.2.7 of the gcbd package is now on CRAN. The gcbd package proposes a benchmarking framework for LAPACK and BLAS operations (as the library can be exchanged in a plug-and-play sense on suitable OSs) and records results in a local database. Its original motivation was to also compare to GPU-based operations. However, it is challenging to keep CUDA working, and packages on CRAN providing the basic functionality appear to come and go, so testing the GPU feature can be difficult. The main point of gcbd is now to actually demonstrate that ‘yes indeed’ we can just swap BLAS/LAPACK libraries without any change to R, or R packages. The ‘configure / rebuild R for xyz’ often seen with ‘xyz’ being Goto or MKL is simply plain wrong: you really can just swap them (on proper operating systems, and R configs – see the package vignette for more). But no matter how often we aim to correct this record, it invariably raises its head another time.

This release accommodates a CRAN change request as we were referencing the (now only suggested) package gputools. As hinted in the previous paragraph, it was once on CRAN but is not right now so we adjusted our reference.

CRANberries also provides a diffstat report for the latest release.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

30 October, 2024 01:10AM

October 28, 2024

Sven Hoexter

GKE version 1.31.1-gke.1678000+ is a baddy

Just a "warn your brothers" for people foolish enough to use GKE and run on the Rapid release channel.

Update from version 1.31.1-gke.1146000 to 1.31.1-gke.1678000 is causing trouble whenever NetworkPolicy resources and a readinessProbe (or health check) are configured. As a workaround we started to remove the NetworkPolicy resources. E.g. when kustomize is involved with a patch like this:

- patch: |-
    $patch: delete
    apiVersion: "networking.k8s.io/v1"
    kind: NetworkPolicy
    metadata:
        name: dummy
  target:
    kind: NetworkPolicy

We tried to update to the latest version - right now 1.31.1-gke.2008000 - which did not change anything. Behaviour is pretty much erratic, sometimes it still works and sometimes the traffic is denied. It also seems that there is some relevant fix in 1.31.1-gke.1678000 because that is now the oldest release of 1.31.1 which I can find in the regular and rapid release channels. The last known good version 1.31.1-gke.1146000 is not available to try a downgrade.

28 October, 2024 04:43PM

hackergotchi for Thomas Lange

Thomas Lange

30.000 FAIme jobs created in 7 years

The number of FAIme jobs has reached 30.000. Yeah!
At the end of this November the FAIme web service for building customized ISOs turns 7 years old. It had reached 10.000 jobs in March 2021, and 20.000 jobs were reached in June 2023. A nice increase in usage.

Here are some statistics for the jobs processed in 2024:

Type of jobs

3%     cloud image
11%     live ISO
86%     install ISO

Distribution

2%     bullseye
8%     trixie
12%     ubuntu 24.04
78%     bookworm

Misc

  • 18%   used a custom postinst script
  • 11%   provided their ssh pub key for passwordless root login
  • 50%   of the jobs didn't include a desktop environment at all; the others mostly used GNOME, XFCE, KDE or the Ubuntu desktop.
  • The biggest ISO was a FAIme job which created a live ISO with a desktop and some additional packages. This job took 30 min to finish and the resulting ISO was 18G in size.

Execution Times

The cloud and live ISOs need more time for their creation because the FAIme server needs to unpack and install all packages. For the install ISO the packages are only downloaded. The number of software packages also affects the build time. Every ISO is built in a VM on an old 6-core E5-1650 v2. Times given are calculated from the jobs of the past two weeks.

Job type     Avg     Max
install no desktop     1 min     2 min
install GNOME     2 min     5 min

The times for Ubuntu without and with desktop are one minute higher than those mentioned above.

Job type     Avg     Max
live no desktop     4 min     6 min
live GNOME     8 min     11 min

The times for cloud images are similar to live images.

A New Feature

For a few weeks now, the system has been showing the number of jobs ahead of you in the queue when you submit a job that cannot be processed immediately.

The Next Milestone

At the end of this year the FAI project will be 25 years old. If you have a success story of your FAI usage to share, please post it to the linux-fai mailing list or send it to me. Do you know the FAI questionnaire? A lot of reports are already available.

Here's an overview of what happened in the past 20 years in the FAI project.

About FAIme

FAIme is the service for building your own customized ISO via a web interface. You can create an installation or live ISO or a cloud image. Several Debian releases can be selected and also Ubuntu server or Ubuntu desktop installation ISOs can be customized. Multiple options are available like selecting a desktop and the language, adding your own package list, choosing a partition layout, adding a user, choosing a backports kernel, adding a postinst script and some more.

28 October, 2024 11:57AM

October 27, 2024

Enrico Zini

Typing decorators for class members with optional arguments

This looks straightforward and is far from it. I expect tool support will improve in the future. Meanwhile, this blog post serves as a step by step explanation for what is going on in code that I'm about to push to my team.

Let's take this relatively straightforward python code. It has a function printing an int, and a decorator that makes its argument optional, taking it from a global default if missing:

from unittest import mock

default = 42


def with_default(f):
    def wrapped(self, value=None):
        if value is None:
            value = default
        return f(self, value)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value):
        print("Answer:", value)


fiddle = Fiddle()
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

It works nicely as expected:

$ python3 test0.py
Answer: 12
Answer: 42
Mocked answer: 12
Mocked answer: None

It lacks functools.wraps and typing, though. Let's add them.

Adding functools.wraps

After adding a simple @functools.wraps, mock unexpectedly stops working:

# python3 test1.py
Answer: 12
Answer: 42
Mocked answer: 12
Traceback (most recent call last):
  File "/home/enrico/lavori/freexian/tt/test1.py", line 42, in <module>
    fiddle.print()
  File "<string>", line 2, in print
  File "/usr/lib/python3.11/unittest/mock.py", line 186, in checksig
    sig.bind(*args, **kwargs)
  File "/usr/lib/python3.11/inspect.py", line 3211, in bind
    return self._bind(args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/inspect.py", line 3126, in _bind
    raise TypeError(msg) from None
TypeError: missing a required argument: 'value'

This is the new code, with explanations and a fix:

# Introduce functools
import functools
from unittest import mock

default = 42


def with_default(f):
    @functools.wraps(f)
    def wrapped(self, value=None):
        if value is None:
            value = default
        return f(self, value)

    # Fix:
    # del wrapped.__wrapped__
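    # (removing __wrapped__ makes mock's autospec inspect wrapped itself,
    #  whose value argument is optional, instead of the original f)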

    return wrapped


class Fiddle:
    @with_default
    def print(self, value):
        assert value is not None
        print("Answer:", value)


fiddle = Fiddle()
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    # mock's autospec uses inspect.getsignature, which follows __wrapped__ set
    # by functools.wraps, which points to a wrong signature: the idea that
    # value is optional is now lost
    fiddle.print()

Adding typing

For simplicity, from now on let's change Fiddle.print to match its wrapped signature:

      # Give up with making value not optional, to simplify things :(
      def print(self, value: int | None = None) -> None:
          assert value is not None
          print("Answer:", value)

Typing with ParamSpec

# Introduce typing, try with ParamSpec
import functools
from typing import TYPE_CHECKING, ParamSpec, Callable
from unittest import mock

default = 42

P = ParamSpec("P")


def with_default(f: Callable[P, None]) -> Callable[P, None]:
    # Using ParamSpec we forward arguments, but we cannot use them!
    @functools.wraps(f)
    def wrapped(self, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)

mypy complains inside the wrapper: while we forward arguments, we don't constrain them, so we can't be sure there is a value in there:

test2.py:17: error: Argument 2 has incompatible type "int"; expected "P.args"  [arg-type]
test2.py:19: error: Incompatible return value type (got "_Wrapped[P, None, [Any, int | None], None]", expected "Callable[P, None]")  [return-value]
test2.py:19: note: "_Wrapped[P, None, [Any, int | None], None].__call__" has type "Callable[[Arg(Any, 'self'), DefaultArg(int | None, 'value')], None]"
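
For contrast, here is a minimal sketch (my addition, not from the original post) of the kind of decorator ParamSpec is designed for: one that forwards *args and **kwargs untouched. As soon as the wrapper needs to name or default a specific argument, as with_default does with value, this pattern no longer fits:

from typing import Callable, ParamSpec

P = ParamSpec("P")


def log_calls(f: Callable[P, None]) -> Callable[P, None]:
    # The wrapper can only pass the arguments through unchanged;
    # it cannot single out "value" and give it a default.
    def wrapped(*args: P.args, **kwargs: P.kwargs) -> None:
        print("about to call the wrapped function")
        f(*args, **kwargs)

    return wrapped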

Typing with Callable

We can use explicit Callable argument lists:

# Introduce typing, try with Callable
import functools
from typing import TYPE_CHECKING, Callable, TypeVar
from unittest import mock

default = 42

A = TypeVar("A")


# Callable cannot represent the fact that the argument is optional, so now mypy
# complains if we try to omit it
def with_default(f: Callable[[A, int | None], None]) -> Callable[[A, int | None], None]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)


if TYPE_CHECKING:
    reveal_type(Fiddle.print)

fiddle = Fiddle()
fiddle.print(12)
# !! Too few arguments for "print" of "Fiddle"  [call-arg]
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

Now mypy complains when we try to omit the optional argument, because Callable cannot represent optional arguments:

test3.py:32: note: Revealed type is "def (test3.Fiddle, Union[builtins.int, None])"
test3.py:37: error: Too few arguments for "print" of "Fiddle"  [call-arg]
test3.py:46: error: Too few arguments for "print" of "Fiddle"  [call-arg]

typing's documentation says:

Callable cannot express complex signatures such as functions that take a variadic number of arguments, overloaded functions, or functions that have keyword-only parameters. However, these signatures can be expressed by defining a Protocol class with a __call__() method:

Let's do that!

Typing with Protocol, take 1

# Introduce typing, try with Protocol
import functools
from typing import TYPE_CHECKING, Protocol, TypeVar, Generic, cast
from unittest import mock

default = 42

A = TypeVar("A", contravariant=True)


class Printer(Protocol, Generic[A]):
    def __call__(_, self: A, value: int | None = None) -> None:
        ...


def with_default(f: Printer[A]) -> Printer[A]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return cast(Printer, wrapped)


class Fiddle:
    # function has a __get__ method to generate bound versions of itself
    # the Printer protocol does not define it, so mypy is now unable to type
    # the bound method correctly
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)


if TYPE_CHECKING:
    reveal_type(Fiddle.print)

fiddle = Fiddle()
# !! Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

New mypy complaints:

test4.py:41: error: Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"  [arg-type]
test4.py:42: error: Missing positional argument "self" in call to "__call__" of "Printer"  [call-arg]
test4.py:50: error: Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"  [arg-type]
test4.py:51: error: Missing positional argument "self" in call to "__call__" of "Printer"  [call-arg]

What happens with class methods is that the function object has a __get__ method that generates bound versions of itself. Our Printer protocol does not define it, so mypy is now unable to type the bound method correctly.
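
As a quick illustration (my addition, not from the original post) of the descriptor protocol at work on a plain, undecorated method:

class Fiddle:
    def print(self, value=None):
        print("Answer:", value)


fiddle = Fiddle()
func = Fiddle.__dict__["print"]

print(func)                          # the plain function object stored in the class
print(fiddle.print)                  # bound method, produced by func.__get__(fiddle, Fiddle)
print(func.__get__(fiddle, Fiddle))  # what the attribute lookup does under the hood

Because the wrapper is cast to Printer, mypy only sees the protocol, which so far has no __get__, so it types fiddle.print as the unbound Printer callable instead of a bound method - which is why it demands a Fiddle as the first argument.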

Typing with Protocol, take 2

So... we add the function descriptor methods to our Protocol!

A lot of this is taken from this discussion.

# Introduce typing, try with Protocol, harder!
import functools
from typing import TYPE_CHECKING, Protocol, TypeVar, Generic, cast, overload, Union
from unittest import mock

default = 42

A = TypeVar("A", contravariant=True)

# We now produce typing for the whole function descriptor protocol
#
# See https://github.com/python/typing/discussions/1040


class BoundPrinter(Protocol):
    """Protocol typing for bound printer methods."""

    def __call__(_, value: int | None = None) -> None:
        """Bound signature."""


class Printer(Protocol, Generic[A]):
    """Protocol typing for printer methods."""

    # noqa annotations are overrides for flake8 being confused, giving either D418:
    # Function/ Method decorated with @overload shouldn't contain a docstring
    # or D105:
    # Missing docstring in magic method
    #
    # F841 is for vulture being confused:
    #   unused variable 'objtype' (100% confidence)

    @overload
    def __get__(  # noqa: D105
        self, obj: A, objtype: type[A] | None = None  # noqa: F841
    ) -> BoundPrinter:
        ...

    @overload
    def __get__(  # noqa: D105
        self, obj: None, objtype: type[A] | None = None  # noqa: F841
    ) -> "Printer[A]":
        ...

    def __get__(
        self, obj: A | None, objtype: type[A] | None = None  # noqa: F841
    ) -> Union[BoundPrinter, "Printer[A]"]:
        """Implement function descriptor protocol for class methods."""

    def __call__(_, self: A, value: int | None = None) -> None:
        """Unbound signature."""


def with_default(f: Printer[A]) -> Printer[A]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return cast(Printer, wrapped)


class Fiddle:
    # Printer now also describes the function descriptor protocol (__get__),
    # so mypy can type both the unbound and the bound method correctly
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)


fiddle = Fiddle()
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

It works! It's typed! And mypy is happy!

27 October, 2024 03:46PM

October 26, 2024

hackergotchi for Steve McIntyre

Steve McIntyre

Mini-Debconf in Cambridge, October 10-13 2024

Group photo

Again this year, Arm offered to host us for a mini-debconf in Cambridge. Roughly 60 people turned up on 10-13 October to the Arm campus, where they made us really welcome. They even had some Debian-themed treats made to spoil us!

Cakes

Hacking together

minicamp

For the first two days, we had a "mini-debcamp" with a disparate group of people working on all sorts of things: Arm support, live images, browser stuff, package uploads, etc. And (as is traditional) lots of people doing last-minute work to prepare slides for their talks.

Sessions and talks

Secure Boot talk

Saturday and Sunday were two days devoted to more traditional conference sessions. Our talks covered a typical range of Debian subjects: a DPL "Bits" talk, an update from the Release Team, live images. We also had some wider topics: handling your own data, what to look for in the upcoming Post-Quantum Crypto world, and even me talking about the ups and downs of Secure Boot. Plus a random set of lightning talks too! :-)

Video team awesomeness

Video team in action

Lots of volunteers from the DebConf video team were on hand too (both on-site and remotely!), so our talks were both streamed live and recorded for posterity - see the links from the individual talk pages in the wiki, or https://meetings-archive.debian.net/pub/debian-meetings/2024/MiniDebConf-Cambridge/ for the full set if you'd like to see more.

A great time for all

Again, the mini-conf went well and feedback from attendees was very positive. Thanks to all our helpers, and of course to our sponsor: Arm for providing the venue and infrastructure for the event, and all the food and drink too!

Photo credits: Andy Simpkins, Mark Brown, Jonathan Wiltshire. Thanks!

26 October, 2024 08:54PM

Russell Coker

The CUPS Vulnerability

The Announcement

Late last month there was an announcement of a “severity 9.9 vulnerability” allowing remote code execution that affects “all GNU/Linux systems (plus others)” [1]. For something to affect all Linux systems that would have to be either a kernel issue or a sshd issue. The announcement included complaints about the lack of response of vendors and “And YES: I LOVE hyping the sh1t out of this stuff because apparently sensationalism is the only language that forces these people to fix”.

He seems to have a different experience of reporting bugs than I do; I have had plenty of success getting bugs fixed without hyping them. I just report the bug, wait a while, and it gets fixed. I have reported potential security bugs without even bothering to try and prove that they were exploitable (any situation where you can make a program crash is potentially exploitable); I just report it and it gets fixed. I was very dubious about his ability to determine how serious a bug is and to accurately report it, so this wasn’t a situation where I was waiting for it to be disclosed to discover if it affected me. I was quite confident that my systems wouldn’t be at any risk.

Analysis

Not All Linux Systems Run CUPS

When it was published my opinion was proven to be correct: it turned out to be a series of CUPS bugs [2]. To describe that as “all GNU/Linux systems (plus others)” seems like a vast overstatement, maybe a good thing to say if you want to be a TikTok influencer but not if you want to be known for computer security work.

For the Debian distribution the cups-browsed package (which seems to be the main exploitable one) is recommended by cups-daemon. As I have my Debian systems configured to not install recommended packages by default, it wasn’t installed on any of my systems. Also the vast majority of my systems don’t do printing and therefore don’t have any part of CUPS installed.

CUPS vs NAT

The next issue is that in Australia most home ISPs don’t have IPv6 enabled and CUPS doesn’t do the things needed to allow receiving connections from the outside world via NAT with IPv4. If inbound port 631 is blocked on both T***** and UDP, as is the default on Australian home Internet, or if there is a correctly configured firewall in place, then the network is safe from attack. There is a feature called uPnP port forwarding [3] to allow server programs to ask a router to send inbound connections to them; this is apparently usually turned off by default in router configuration. If it is enabled then there are Debian packages of software to manage this: the miniupnpc package has the client (which can request NAT changes on the router) [4]. That package is not installed on any of my systems and for my home network I don’t use a router that runs uPnP.

The only program I knowingly run that uses uPnP is Warzone2100 and as I don’t play network games that doesn’t happen. Also as an aside in version 4.4.2-1 of warzone2100 in Debian and Ubuntu I made it use Bubblewrap to run the game in a container. So a Remote Code Execution bug in Warzone 2100 won’t be an immediate win for an attacker (exploits via X11 or Wayland are another issue).

MAC Systems

Debian has had AppArmor enabled by default since Buster was released in 2019 [5]. There are claims that AppArmor will stop this exploit from doing anything bad.

To check SE Linux access I first use the “semanage fcontext” command to check the context of the binary, cupsd_exec_t means that the daemon runs as cupsd_t. Then I checked what file access is granted with the sesearch program, mostly just access to temporary files, cupsd config files, the faillog, the Kerberos cache files (not used on the Kerberos client systems I run), Samba run files (might be a possibility of exploiting something there), and the security_t used for interfacing with kernel security infrastructure. I then checked the access to the security class and found that it is permitted to check contexts and access-vectors – not access that can be harmful.

The next test was to use sesearch to discover what capabilities are granted, which unfortunately includes the sys_admin capability, a capability that allows many sysadmin tasks that could be harmful (I just checked the Fedora source and Fedora 42 has the same access). Whether the sys_admin capability can be used to do bad things with the limited access cupsd_t has to device nodes etc is not clear. But this access is undesirable.

So the SE Linux policy in Debian and Fedora will stop cupsd_t from writing SETUID programs that can be used by random users for root access and stop it from writing to /etc/shadow etc. But the sys_admin capability might allow it to do hostile things and I have already uploaded a changed policy to Debian/Unstable to remove that. The sys_rawio capability also looked concerning but it’s apparently needed to probe for USB printers and as the domain has no access to block devices it is otherwise harmless. Below are the commands I used to discover what the policy allows and the output from them.

# semanage fcontext -l|grep bin/cups-browsed
/usr/bin/cups-browsed                              regular file       system_u:object_r:cupsd_exec_t:s0 
# sesearch -A -s cupsd_t -c file -p write
allow cupsd_t cupsd_interface_t:file { append create execute execute_no_trans getattr ioctl link lock map open read rename setattr unlink write };
allow cupsd_t cupsd_lock_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_log_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_runtime_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_rw_etc_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_tmp_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t faillog_t:file { append getattr ioctl lock open read write };
allow cupsd_t init_tmpfs_t:file { append getattr ioctl lock read write };
allow cupsd_t krb5_host_rcache_t:file { append create getattr ioctl link lock open read rename setattr unlink write }; [ allow_kerberos ]:True
allow cupsd_t print_spool_t:file { append create getattr ioctl link lock open read relabelfrom relabelto rename setattr unlink write };
allow cupsd_t samba_var_t:file { append getattr ioctl lock open read write };
allow cupsd_t security_t:file { append getattr ioctl lock open read write };
allow cupsd_t security_t:file { append getattr ioctl lock open read write }; [ allow_kerberos ]:True
allow cupsd_t usbfs_t:file { append getattr ioctl lock open read write };
# sesearch -A -s cupsd_t -c security
allow cupsd_t security_t:security check_context; [ allow_kerberos ]:True
allow cupsd_t security_t:security { check_context compute_av };
# sesearch -A -s cupsd_t -c capability
allow cupsd_t cupsd_t:capability net_bind_service; [ allow_ypbind ]:True
allow cupsd_t cupsd_t:capability { audit_write chown dac_override dac_read_search fowner fsetid ipc_lock kill net_bind_service setgid setuid sys_admin sys_rawio sys_resource sys_tty_config };
# sesearch -A -s cupsd_t -c capability2
allow cupsd_t cupsd_t:capability2 { block_suspend wake_alarm };
# sesearch -A -s cupsd_t -c blk_file

Conclusion

This is an example of how not to handle security issues. Some degree of promotion is acceptable but this is very excessive and will result in people not taking security announcements seriously in future. I wonder if this is even a good career move by the researcher in question, will enough people believe that they actually did something good in this that it outweighs the number of people who think it’s misleading at best?

26 October, 2024 06:51AM by etbe

October 25, 2024

hackergotchi for Jonathan Dowland

Jonathan Dowland

Behringer Model-D (synths I didn't buy)

Whilst researching what synth to buy, I learned of the Behringer1 Model-D2: a 2018 clone of the 1970 Moog Minimoog, in a desktop form factor.

Behringer Model-D

In common with the original Minimoog, it's a monophonic analogue synth, featuring three audible oscillators3, Moog's famous ladder filter and a basic envelope generator. The Model-D has lost the keyboard from the original and added some patch points for the different stages, enabling some slight re-routing of the audio components.

1970 Moog Minimoog

Since I was focussing on more fundamental, back-to-basics instruments, this was very appealing to me. I'm very curious to find out what's so compelling about the famous Moog sound. The relative lack of features feels like an advantage: less to master. The additional patch points make it a little more flexible and offer a potential gateway into the world of modular synthesis. The Model-D is also very affordable: about £200. I'll never own a real Moog.

For this to work, I would need to supplement it with some other equipment. I'd need a keyboard (or press the Micron into service as a controller); I would want some way of recording and overdubbing (same as with any synth). There are no post-mix effects on the Model-D, such as delay, reverb or chorus, so I may also want something to add those.

What stopped me was partly the realisation that there was little chance that a perennial beginner, such as I, could eke anything novel out of a synthesiser design that's 54 years old. Perhaps that shouldn't matter, but it gave me pause. Whilst the Model-D has patch points, I don't have anything to connect to them, and I'm firmly wanting to avoid the Modular Synthesis money pit. The lack of effects and polyphony could make it hard to live-sculpt a tone.

I started characterizing the Model-D as the "heart" choice, but it seemed wise to instead go for a "head" choice.

Maybe another day!


  1. There's a whole other blog post of material I could write about Behringer and their clones of classic synths, some long out of production, and others, not so much. But, I decided to skip on that for now.
  2. taken from the fact that the Minimoog was a productised version of Moog's fourth internal prototype, the model D.
  3. 2 oscillators is more common in modern synths

25 October, 2024 03:56PM

Reproducible Builds (diffoscope)

diffoscope 282 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 282. This version includes the following changes:

[ Chris Lamb ]
* Ignore errors when listing .ar archives. (Closes: #1085257)
* Update copyright years.

You can find out more by visiting the project homepage.

25 October, 2024 12:00AM

October 24, 2024

hackergotchi for Emmanuel Kasper

Emmanuel Kasper

back to blogging and running a feed reader as a containerized systemd service

After reading about Jonathan McDowell's feed reader install and the back to blogging initiative, I decided to install a feed reader to follow all those nice blog posts. With a feed reader you can compose your own feed of news based on blog posts, websites, Mastodon toots. And then you are independent from the ad-oriented ranking algorithms of social networks.

Since Jonathan used FreshRSS as a feed reader, I started with the same software. At a quick glance at its GitHub page, it looked like a good project:

  • active contributions
  • different channels for stable and latest version of the software
  • container images pointing to the stable release
  • support for multiple databases for storage, including PostgreSQL
  • correct documentation mentioning security caveats

I prefer to do the container image installation using podman since:

  • upgrades of FreshRSS are easy to do and can be done separately from operating system upgrades
  • I do not mess up my base operating system with PHP (subjective), and in case of a compromised FreshRSS, the freshrss/apache install would still be confined to its own Linux namespaces, separated from the rest of the system.

Podman is image compatible with Docker as they both implement the OCI runtime specification, and have a nearly identical command line interface. This installation will be done on a Debian server, but should work too on any Linux distribution.

Initial setup

  • start a container image based on the start command provided by the FreshRSS project. The podman command line is nearly identical to the docker command line, except that podman expects the fully qualified image name (including the registry domain), and I chose to run the freshrss container on the localhost interface only. I also use a defined version tag, because using the latest tag makes it complicated to track which exact version I have installed.
# podman pull docker.io/freshrss/freshrss:1.20.1
# podman run --detach --restart unless-stopped --log-opt max-size=10m \
  --publish 127.0.0.1:8081:80 \
  --env TZ=Europe/Paris \
  --env 'CRON_MIN=1,31' \
  --volume freshrss_data:/var/www/FreshRSS/data \
  --volume freshrss_extensions:/var/www/FreshRSS/extensions \
  --name freshrss \
  docker.io/freshrss/freshrss:1.20.1
  • verify where the podman volumes have been created. This is where the user data of freshrss will be stored.
# podman volume ls
# podman volume inspect freshrss_data
  • now that freshrss is installed, you can start its configuration wizard at localhost:8081. You should keep the default SQLite choice
  • finally, after running the wizard, you can log in again and add some feeds
  • verify that your config has been stored outside the container, inside the volume (so that it will not be erased in case of upgrades)
# ls -l /var/lib/containers/storage/volumes/freshrss_data/_data/users/
  • verify the state of sqlite database
echo '.tables'| sqlite3  /var/lib/containers/storage/volumes/freshrss_data/_data/users/<your freshrss user>/db.sqlite 
category  entry     entrytag  entrytmp  feed      tag

Going with FreshRSS in Production

Podman has this very nice feature that it can generate a systemd unit from a running container, and use systemd to start a container on boot. This is in contrast to Docker, where the Docker daemon does the stop/start of containers on boot. I prefer the systemd approach as it treats containers the same way as other system services.

Once the freshrss container is running we can generate a systemd unit of it with:

# podman generate systemd --new --name freshrss | tee /etc/systemd/system/container-freshrss.service

Let’s stop the container we started previously, and use systemd to manage it:

# podman stop freshrss
# systemctl enable --now container-freshrss.service

We can verify that we have a listening socket on the localhost interface, on source port 8081:

# systemctl status container-freshrss.service
  ...
# ss --listening --numeric --process '( sport = 8081 )'
Netid         State           Recv-Q          Send-Q                   Local Address:Port                   Peer Address:Port         Process         
t*****           LISTEN          0               4096                         127.0.0.1:8081                        0.0.0.0:*             users:(("conmon",pid=4464,fd=5))

Nota Bene: conmon (8) is the process managing the network namespace in which freshrss is running, hence it is displayed as the process owning the listening socket.

Exposing FreshRSS to the external world

We have now a running service, but we need to make it reachable from the internet. The simplest, classical way, is to create a subdomain and a VirtualHost configured as a reverse proxy to access the service at 127.0.0.1:8081. Fortunately the FreshRSS authors have documented this setup in https://github.com/FreshRSS/FreshRSS/tree/edge/Docker#alternative-reverse-proxy-using-apache and those steps are no different from a standard application behind a web reverse proxy.

Upgrading freshrss container to a newer version

Documentation showing how to install a piece of software is worth little when it does not show how to upgrade said software. Installing is easy; upgrading is where the challenge is. Fortunately, thanks to the good stateless design of FreshRSS (everything is in the SQLite database, which is backed by a non-ephemeral volume in our setup), switching versions is a piece of cake.

# podman pull docker.io/freshrss/freshrss:1.20.2
# systemctl stop container-freshrss.service
# sed -i 's,docker.io/freshrss/freshrss:1.20.1,docker.io/freshrss/freshrss:1.20.2,' /etc/systemd/system/container-freshrss.service
# systemctl daemon-reload
# systemctl start container-freshrss.service

If you need to roll back, you just need to revert the version numbers in the instructions above.

Enjoy your own reader feed!

I will add the following feeds of blogs I like; let us see if I follow them better with a feed reader!

24 October, 2024 07:33PM by Manu

Valhalla's Things

Asemic Writing, a Zine

Posted on October 24, 2024
Tags: madeof:atoms, madeof:bits, craft:zine

An open booklet with lines that look like some kind of cursive non-alphabetic script, framed by a border in the same script and four symbols in the corners.

I have no idea either.

The front of that booklet, with three lines of fake text in different sizes and a circle of the same.

Happy Maladay1 to those who celebrate it, I guess.


A template on white paper with pencil lines where text is supposed to go.

Multiple A4 sheet of tracing paper with fake text, plus an A6 sheet and a white A6 sheet with a stamp impression.

If you care about the how, it started as china ink on tracing paper, with the help of a template (and a correction sheet for one page where I used the wrong line on the template).

A rubber stamp was carved with the author’s signature and stamped on white paper because the ink from the pad wasn’t working well on tracing paper.

Then everything was scanned (with the correction on top of the wrong page) asemic_zine_scans.tar.

Imported in Inkscape and traced asemic_zine_svg.tar.

Printed, cut in half, folded and stapled. The magenta lines weren’t by design, but are there because my printer is currently2 cursed.

And finally, asemic_zine.pdf was created, joining the pages together with pdfjam, for convenience in case somebody wants to download the full thing.

All the .tar and .pdf downloads from this page are released under the WTFPL, or All Rites Reversed..


  1. it’s still technically Maladay when I write this, even if by the time you’ll get this it’s probably the 6th of The Aftermath.↩︎

  2. I mean, all printers are always cursed, but at different times they can be cursed in different and novel ways.↩︎

24 October, 2024 12:00AM

October 23, 2024

hackergotchi for Jonathan Dowland

Jonathan Dowland

Why hardware synths?

Russell wrote a great comment on my last post (thanks!):

What benefits do these things offer when a general purpose computer can do so many things nowadays? Is there a USB keyboard that you can connect to a laptop or phone to do these things? I presume that all recent phones have the compute power to do all the synthesis you need if you have the right software. Is it just a lack of software and infrastructure for doing it on laptops/phones that makes synthesisers still viable?

I've decided to turn my response into a post of its own.

The issue is definitely not compute power. You can indeed attach a USB keyboard to a computer and use a plethora of software synthesisers, including very faithful emulations of all the popular classics. The raw compute power of modern hardware synths is comparatively small: I’ve been told the modern Korg digital synths are on a par with a raspberry pi. I’ve seen some DSPs which are 32 bit ARMs, and other tools which are roughly equivalent to arduinos.

I can think of four reasons hardware synths remain popular with some despite the above:

  1. As I touched on in my original synth post, computing dominates my life outside of music already. I really wanted something separate from that to keep mental distance from work.

  2. Synths have hard real-time requirements. They don't have raw power in compute terms, but they absolutely have to do their job within microseconds of being instructed to, with no exceptions. Linux still has a long way to go for hard real-time.

  3. The Linux audio ecosystem is… complex. Dealing with pipewire, pulseaudio, jack, alsa, oss, and anything else I've forgotten, as well as their failure modes, is too time consuming.

  4. The last point is to do with creativity and inspiration. A good synth is more than the sum of its parts: it's an instrument, carefully designed and its components integrated by musically-minded people who have set out to create something to inspire. There are plenty of synths which aren't good instruments, but have loads of features: they’re boxes of "stuff". Good synths can't do it all: they often have limitations which you have to respond to, work around or with, creatively. This was expressed better than I could by Trent Reznor in the video archetype of a synthesiser:

23 October, 2024 09:51AM

Arturia Microfreak

Arturia Microfreak. © CC-BY-SA 4 (https://commons.wikimedia.org/wiki/File:MicroFreak.jpg)

I nearly did, but ultimately I didn't buy an Arturia Microfreak.

The Microfreak is a small form factor hybrid synth with a distinctive style. It's priced at the low end of the market and it is overflowing with features. It has a weird 2-octave keyboard which is a stylophone-style capacitive strip rather than weighted keys. It seems to have plenty of controls, but given the amount of features it has, much of that functionality is inevitably buried in menus. The important stuff is front and centre, though. The digital oscillators are routed through an analog filter. The Microfreak gained sampler functionality in a firmware update that surprised and delighted its owners.

I watched a load of videos about the Microfreak, but the above review from musician Stimming stuck in my mind because it made a comparison between the Microfreak and *****age Engineering's OP-1.

The *****age Engineering OP-1.

I'd been lusting after the OP-1 since it appeared in 2011: a pocket-sized1 music making machine with eleven synthesis engines, a sampler, and less conventional features such as an FM radio, a large colour OLED display, and a four track recorder. That last feature in particular was really appealing to me: I loved the idea of having an all-in-one machine to try and compose music. Even then, I was not keen on involving conventional computers in music making.

Of course in many ways it is a very compromised machine. I never did buy an OP-1, and by now they've replaced it with a new model (the OP-1 field) that costs 50% more (but doesn't seem to do 50% more). I'm still not buying one.

Framing the Microfreak in terms of the OP-1 made the penny drop for me. The Microfreak doesn't have the four-track functionality, but almost no synth has: I'm going to have to look at something external to provide that. But it might capture a similar sense of fun; it's something I could use on the sofa, in the spare room, on the train, during lunchbreaks at work, etc.

On the other hand, I don't want to make the same mistake as with the Micron: too much functionality requiring some experience to understand what you want so you can go and find it in the menus. I also didn't get a chance to audition the unusual keyboard: there's only one music store carrying synths left in Newcastle and they didn't have one.

So I didn't buy the Microfreak. Maybe one day in the future once I'm further down the road. Instead, I started to concentrate my search on more fundamental, back-to-basics instruments…


  1. Big pockets, mind

23 October, 2024 09:51AM

Michael Ablassmeier

qmpbackup 0.33

In the last weeks qmpbackup has seen a few more improvements.

  • Adds support for CEPH/RBD backed devices.
  • Allows using unique bitmaps for having multiple, separate backup chains.
  • Adds support for jsonified filename configurations as often used on Proxmox systems.
  • Adds support for saving attached pflash/nvram devices (storing UEFI related settings).
  • qmprestore can now merge the backup chain into a new image file, and the new snapshotrebase command can rebase the images and, after committing, create an internal qcow snapshot, so one can easily switch between different VM states in the backup.

I've been running it lately to back up virtual machines on Proxmox systems, where the Proxmox Backup Server is not an option.

23 October, 2024 12:00AM

October 22, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

drat 0.2.5 on CRAN: Small Updates

drat user

A new minor release of the drat package arrived on CRAN today, which is just over a year since the previous release. drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users because repositories with marked releases is the better way to distribute code.

Because for once it really is as your mother told you: Friends don’t let friends install random git commit snapshots. Properly rolled-up releases it is. Just how CRAN shows us: a model that has demonstrated for over two-and-a-half decades how to do this. And you can too: drat is easy to use, documented by six vignettes and just works. Detailed information about drat is at its documentation site. That said, and ‘these days’, if you mainly care about github code then r-universe is there too, also offering binaries it makes and all that jazz. But sometimes you just want to, or need to, roll a local repository and drat can help you there.

This release contains a small PR (made by Arne Holmin just after the previous release) adding support for an ‘OSflavour’ variable (helpful for macOS). We also corrected an issue with one test file being insufficiently careful about using git2r only when installed, and as usual did a round of maintenance for the package concerning both continuous integration and documentation.

The NEWS file summarises the release as follows:

Changes in drat version 0.2.5 (2024-10-21)

  • Function insertPackage has a new optional argument OSflavour (Arne Holmin in #142)

  • A test file conditions correctly about git2r being present (Dirk)

  • Several smaller packaging updates and enhancements to continuous integration and documentation have been added (Dirk)

Courtesy of my CRANberries, there is a comparison to the previous release. More detailed information is on the drat page as well as at the documentation site.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 October, 2024 12:38AM

October 21, 2024

Sahil Dhiman

Free Software Mirrors in India

Last Updated on 02/11/2024.

List of public mirrors in India. Locations were discovered on the basis of personal knowledge, traces or GeoIP. Mirrors which aren’t accessible outside their own ASN are excluded.

North India

East India

South India

West India

CDN (or behind one)

Many thanks to Shrirang and Saswata for tips and corrections. Let me know if I’m missing someone or something is amiss.

21 October, 2024 06:29PM

Sven Hoexter

Terraform: Making Use of Precondition Checks

I'm in the unlucky position of having to deal with GitHub. Thus I have a Terraform module in a project which deals with populating organization secrets in our GitHub organization, and assigning repositories access to those secrets.

Since the GitHub Terraform provider internally works mostly with repository IDs, not slugs (the human-readable organization/repo format), we have to do some mapping in between. In my case it looks like this:

#tfvars Input for Module
org_secrets = {
    "SECRET_A" = {
        repos = [
            "infra-foo",
            "infra-baz",
            "deployment-foobar",
        ]
    }
    "SECRET_B" = {
        repos = [
            "job-abc",
            "job-xyz",
        ]
    }
}

# Module Code
/*
Limitation: The GH search API which is queried returns at most 1000
results. Thus whenever we reach that limit this approach will no longer work.
The query is also intentionally limited to internal repositories right now.
*/
data "github_repositories" "repos" {
    query           = "org:myorg archived:false -is:public -is:private"
    include_repo_id = true
}

/*
The properties of the github_repositories.repos data source queried
above contains only lists. Thus we've to manually establish a mapping
between the repository names we need as a lookup key later on, and the
repository id we got in another list from the search query above.
*/
locals {
    # Assemble the set of repository names we need repo_ids for
    repos = toset(flatten([for v in var.org_secrets : v.repos]))

    # Walk through all names in the query result list and check
    # if they're also in our repo set. If yes add the repo name -> id
    # mapping to our resulting map
    repos_and_ids = {
        for i, v in data.github_repositories.repos.names : v => data.github_repositories.repos.repo_ids[i]
        if contains(local.repos, v)
    }
}

resource "github_actions_organization_secret" "org_secrets" {
    for_each        = var.org_secrets
    secret_name     = each.key
    visibility      = "selected"
    # the logic how the secret value is sourced is omitted here
    plaintext_value = data.xxx
    selected_repository_ids = [
        for r in each.value.repos : local.repos_and_ids[r]
        if can(local.repos_and_ids[r])
    ]
}

Now if we do something bad, delete a repository and forget to remove it from the configuration for the module, we receive some error message that a (numeric) repository ID could not be found. That is pretty much useless for the average user, because you have to figure out which repository is still in the configuration list but got deleted recently.

Luckily, since version 1.2 Terraform supports precondition checks, which we can use in an output block to report which repository is missing. What we need is the set of missing repositories and the validation condition:

locals {
    # Debug facility in combination with an output and precondition check
    # There we can report which repository we still have in our configuration
    # but no longer get as a result from the data provider query
    missing_repos = setsubtract(local.repos, data.github_repositories.repos.names)
}

# Debug facility - If we can not find every repository in our
# search query result, report those repos as an error
output "missing_repos" {
    value = local.missing_repos
    precondition {
        condition     = length(local.missing_repos) == 0
        error_message = format("Repos in config missing from resultset: %v", local.missing_repos)
    }
}

Now you only have to be aware that GitHub is GitHub: the TF provider has open bugs and is not supported by GitHub, so you will encounter inconsistent results. But it works, even if your terraform apply failed that way.

21 October, 2024 01:26PM

Russ Allbery

California general election

As usual with these every-two-year posts, probably of direct interest only to California residents. Maybe the more obscure things we're voting on will be a minor curiosity to people elsewhere. I'm a bit late this year, although not as late as last year, so a lot of people may have already voted, but I've been doing this for a while and wanted to keep it up.

This post will only be about the ballot propositions. I don't have anything useful to say about the candidates that isn't hyper-local. I doubt anyone who has read my posts will be surprised by which candidates I'm voting for.

As always with California ballot propositions, it's worth paying close attention to which propositions were put on the ballot by the legislature, usually because there's some state law requirement (often that I disagree with) that they be voted on by the public, and propositions that were put on the ballot by voter petition. The latter are often poorly written and have hidden problems. As a general rule of thumb, I tend to default to voting against propositions added by petition. This year, one can conveniently distinguish by number: the single-digit propositions were added by the legislature, and the two-digit ones were added by petition.

Proposition 2: YES. Issue $10 billion in bonds for public school infrastructure improvements. I generally vote in favor of spending measures like this unless they have some obvious problem. The opposition argument is a deranged rant against immigrants and government debt and fails to point out actual problems. The opposition argument also claims this will result in higher property taxes and, seriously, if only that were true. That would make me even more strongly in favor of it.

Proposition 3: YES. Enshrines the right to marriage without regard to ***** or race into the California state constitution. This is already the law given US Supreme Court decisions, but fixing California state law is a long-overdue and obvious cleanup step. One of the quixotic things I would do if I were ever in government, which I will never be, would be to try to clean up the laws to make them match reality, repealing all of the dead clauses that were overturned by court decisions or are never enforced. I am in favor of all measures in this direction even when I don't agree with the direction of the change; here, as a bonus, I also strongly agree with the change.

Proposition 4: YES. Issue $10 billion in bonds for infrastructure improvements to mitigate climate risk. This is basically the same argument as Proposition 2. The one drawback of this measure is that it's kind of a mixed grab bag of stuff and probably some of it should be supported out of the general budget rather than bonds, but I consider this a minor problem. We definitely need to ramp up climate risk mitigation efforts.

Proposition 5: YES. Reduces the required super-majority to pass local bond measures for affordable housing from 67% to 55%. The fact that this requires a supermajority at all is absurd, California desperately needs to build more housing of any kind however we can, and publicly funded housing is an excellent idea.

Proposition 6: YES. Eliminates "involuntary servitude" (in other words, "temporary" slavery) as a legally permissible punishment for crimes in the state of California. I'm one of the people who think the 13th Amendment to the US Constitution shouldn't have an exception for punishment for crimes, so obviously I'm in favor of this. This is one very, very tiny step towards improving the absolutely atrocious prison conditions in the state.

Proposition 32: YES. Raises the minimum wage to $18 per hour from the current $16 per hour, over two years, and ties it to inflation. This is one of the rare petition-based propositions that I will vote in favor of because it's very straightforward, we clearly should be raising the minimum wage, and living in California is absurdly expensive because we refuse to build more housing (see Propositions 5 and 33). The opposition argument is the standard lie that a higher minimum wage will increase unemployment, which we know from numerous other natural experiments is simply not true.

Proposition 33: NO. Repeals Costa-Hawkins, which prohibits local municipalities from enacting rent control on properties built after 1995. This one is going to split the progressive vote rather badly, I suspect.

California has a housing crisis caused by not enough housing supply. It is not due to vacant housing, as much as some people would like you to believe that; the numbers just don't add up. There are way more people living here and wanting to live here than there is housing, so we need to build more housing.

Rent control serves a valuable social function of providing stability to people who already have housing, but it doesn't help, and can hurt, the project of meeting actual housing demand. Rent control alone creates a two-tier system where people who have housing are protected but people who don't have housing have an even harder time getting housing than they do today. It's therefore quite consistent with the general NIMBY playbook of trying to protect the people who already have housing by making life harder for the people who do not, while keeping the housing supply essentially static.

I am in favor of rent control in conjunction with real measures to increase the housing supply. I am therefore opposed to this proposition, which allows rent control without any effort to increase housing supply. I am quite certain that, if this passes, some municipalities will use it to make constructing new high-density housing incredibly difficult by requiring it all be rent-controlled low-income housing, thus cutting off the supply of multi-tenant market-rate housing entirely. This is already a common political goal in the part of California where I live. Local neighborhood groups advocate for exactly this routinely in local political fights.

Give me a mandate for new construction that breaks local zoning obstructionism, including new market-rate housing to maintain a healthy lifecycle of housing aging into affordable housing as wealthy people move into new market-rate housing, and I will gladly support rent control measures as part of that package. But rent control on its own just allocates winners and losers without addressing the underlying problem.

Proposition 34: NO. This is an excellent example of why I vote against petition propositions by default. This is a law designed to affect exactly one organization in the state of California: the AIDS Healthcare Foundation. The reason for this targeting is disputed; one side claims it's because of the AHF support for Proposition 33, and another side claims it's because AHF is a slumlord abusing California state funding. I have no idea which side of this is true. I also don't care, because I am fundamentally opposed to writing laws this way. Laws should establish general, fair principles that are broadly applicable, not be written with bizarrely specific conditions (health care providers that operate multifamily housing) that will only be met by a single organization. This kind of nonsense creates bad legal codes and the legal equivalent of technical debt. Just don't do this.

Proposition 35: YES. I am, reluctantly, voting in favor of this even though it is a petition proposition because it looks like a useful simplification and cleanup of state health care funding, makes an expiring tax permanent, and is supported by a very wide range of organizations that I generally trust to know what they're talking about. No opposition argument was filed, which I think is telling.

Proposition 36: NO. I am resigned to voting down attempts to start new "war on drugs" nonsense for the rest of my life because the people who believe in this crap will never, ever, ever stop. This one has bonus shoplifting fear-mongering attached, something that touches on nasty local politics that have included large retail chains manipulating crime report statistics to give the impression that shoplifting is up dramatically. It's yet another round of the truly horrific California "three strikes" criminal penalty obsession, which completely misunderstands both the causes of crime and the (almost nonexistent) effectiveness of harsh punishment as deterrence.

21 October, 2024 12:03AM

October 20, 2024

hackergotchi for Bits from Debian

Bits from Debian

Ada Lovelace Day 2024 - Interview with some Women in Debian

Ada Lovelace portrait

Ada Lovelace Day was celebrated on October 8 in 2024, and on this occasion, to celebrate and raise awareness of the contributions of women to the STEM fields we interviewed some of the women in Debian.

Here we share their thoughts, comments, and concerns with the hope of inspiring more women to become part of the Sciences, and of course, to work inside of Debian.

This article was simulcasted to the debian-women mail list.

Beatrice Torracca

1. Who are you?

I am Beatrice, I am Italian. Internet technology and everything computer-related is just a hobby for me, not my line of work or the subject of my academic studies. I have too many interests and too little time. I would like to do lots of things and at the same time I am too Oblomovian to do any.

2. How did you get introduced to Debian?

As a user I started using newsgroups when I had my first dialup connection and there was always talk about this strange thing called Linux. Since moving from DR DOS to Windows was a shock for me, feeling like I lost the control of my machine, I tried Linux with Debian Potato and I never strayed away from Debian since then for my personal equipment.

3. How long have you been into Debian?

Define "into". As a user... since Potato, too many years to count. As a contributor, a similar amount of time, since early 2000 I think. My first archived email about contributing to the translation of the description of Debian packages dates 2001.

4. Are you using Debian in your daily life? If yes, how?

Yes!! I use testing. I have it on my desktop PC at home and I have it on my laptop. The desktop is where I have a local IMAP server that fetches all the mails of my email accounts, and where I sync and back up all my data. On both I do day-to-day stuff (from email to online banking, from shopping to taxes), all forms of entertainment, a bit of work if I have to work from home (GNU R for statistics, LibreOffice... the usual suspects). At work I am required to have another OS, sadly, but I am working on setting up a Debian Live system to use there too. Plus if at work we start doing bioinformatics there might be a Linux machine in our future... I will of course suggest and hope for a Debian system.

5. Do you have any suggestions to improve women's participation in Debian?

This is a tough one. I am not sure. Maybe, more visibility for the women already in the Debian Project, and make the newcomers feel seen, valued and welcomed. A respectful and safe environment is key too, of course, but I think Debian made huge progress in that aspect with the Code of Conduct. I am a big fan of promoting diversity and inclusion; there is always room for improvement.

Ileana Dumitrescu (ildumi)

1. Who are you?

I am just a girl in the world who likes cats and packaging Free Software.

2. How did you get introduced to Debian?

I was tinkering with a computer running Debian a few years ago, and I decided to learn more about Free Software. After a search or two, I found Debian Women.

3. How long have you been into Debian?

I started looking into contributing to Debian in 2021. After contacting Debian Women, I received a lot of information and helpful advice on different ways I could contribute, and I decided package maintenance was the best fit for me. I eventually became a Debian Maintainer in 2023, and I continue to maintain a few packages in my spare time.

4. Are you using Debian in your daily life? If yes, how?

Yes, it is my favourite GNU/Linux operating system! I use it for email, chatting, browsing, packaging, etc.

5. Do you have any suggestions to improve women's participation in Debian?

The mailing list for Debian Women may attract more participation if it is utilized more. It is where I started, and I imagine participation would increase if it is more engaging.

Kathara Sasikumar (kathara)

1. Who are you?

I'm Kathara Sasikumar, 22 years old and a recent Debian user turned Maintainer from India. I try to become a creative person through sketching or playing guitar chords, but it doesn't work! xD

2. How did you get introduced to Debian?

When I first started college, I was that overly enthusiastic student who signed up for every club and volunteered for anything that crossed my path just like every other fresher.

But then, the pandemic hit, and like many, I hit a low point. COVID depression was real, and I was feeling pretty down. Around this time, the FOSS Club at my college suddenly became more active. My friends, knowing I had a love for free software, pushed me to join the club. They thought it might help me lift my spirits and get out of the slump I was in.

At first, I joined only out of peer pressure, but once I got involved, the club really took off. FOSS Club became more and more active during the pandemic, and I found myself spending more and more time with it.

A year later, we had the opportunity to host a MiniDebConf at our college. Where I got to meet a lot of Debian developers and maintainers, attending their talks and talking with them gave me a wider perspective on Debian, and I loved the Debian philosophy.

At that time, I had been distro hopping but never quite settled down. I occasionally used Debian but never stuck around. However, after the MiniDebConf, I found myself using Debian more consistently, and it truly connected with me. The community was incredibly warm and welcoming, which made all the difference.

3. How long have you been into Debian?

Now, I've been using Debian as my daily driver for about a year.

4. Are you using Debian in your daily life? If yes, how?

It has become my primary distro, and I use it every day for continuous learning and working on various software projects with free and open-source tools. Plus, I've recently become a Debian Maintainer (DM) and have taken on the responsibility of maintaining a few packages. I'm looking forward to contributing more to the Debian community 🙂

Rhonda D'Vine (rhonda)

1. Who are you?

My name is Rhonda, my pronouns are she/her, or per/pers. I'm 51 years old, working in IT.

2. How did you get introduced to Debian?

I was already looking into Linux because of university, first it was SuSE. And people played around with gtk. But when they packaged GNOME and it just didn't even install I looked for alternatives. A working colleague from back then gave me a CD of Debian. Though I couldn't install from it because Slink didn't recognize the pcmcia drive. I had to install it via floppy disks, but apart from that it was quite well done. And the early GNOME was working, so I never looked back. 🙂

3. How long have you been into Debian?

Even before I was more involved, a colleague asked me whether I could help with translating the release documentation. That was my first contribution to Debian, for the slink release in early 1999. And I was using some other software before on my SuSE systems, and I wanted to continue to use them on Debian obviously. So that's how I got involved with packaging in Debian. But I continued to help with translation work, for a long period of time I was almost the only person active for the German part of the website.

4. Are you using Debian in your daily life? If yes, how?

Being involved with Debian was a big part of the reason I got into my jobs since a long time now. I always worked with maintaining Debian (or Ubuntu) systems. Privately I run Debian on my laptop, with occasionally switching to Windows in dual boot when (rarely) needed.

5. Do you have any suggestions to improve women's participation in Debian?

There are factors that we can't influence, like that a lot of women are pushed into care work because patriarchal structures work that way, and don't have the time nor energy to invest a lot into other things. But we could learn to appreciate smaller contributions better, and not focus so much on the quantity of contributions. When we look at longer discussions on mailing lists, those that write more mails actually don't contribute more to the discussion, they often repeat themselves without adding more substance. Through working on our own discussion patterns this could create a more welcoming environment for a lot of people.

Sophie Brun (sophieb)

1. Who are you?

I'm a 44-year-old French woman. I'm married and I have 2 sons.

2. How did you get introduced to Debian?

In 2004 my boyfriend (now my husband) installed Debian on my personal computer to introduce me to Debian. I knew almost nothing about Open Source. During my engineering studies, a professor mentioned the existence of Linux, Red Hat in particular, but without giving any details.

I learnt Debian by using and reading (in advance) The Debian Administrator's Handbook.

3. How long have you been into Debian?

I've been a user since 2004, but I only started contributing to Debian in 2015: I had quit my job and I wanted to work on something more meaningful. That's why I joined my husband at Freexian, his company. Unlike most people, I think, I started contributing to Debian for my work. I only became a DD in 2021, under gentle social pressure and once I felt confident enough.

4. Are you using Debian in your daily life? If yes, how?

Of course I use Debian in my professional life for almost all tasks, from administrative work to Debian packaging.

I also use Debian in my personal life. I have very basic needs: Firefox, LibreOffice, GnuCash and Rhythmbox are the main applications I need.

Sruthi Chandran (srud)

1. Who are you?

A feminist, a librarian turned Free Software advocate, and a Debian Developer. Part of the Debian Outreach team and the DebConf Committee.

2. How did you get introduced to Debian?

I got introduced to the free software world and Debian through my husband. I attended many Debian events with him, and during one such event, out of curiosity, I participated in a Debian packaging workshop. Just after that I visited a Tibetan community in India, and they mentioned that there was no proper Tibetan font in GNU/Linux. A Tibetan font became my first package in Debian.

3. How long have you been into Debian?

I have been contributing to Debian since 2016 and have been a Debian Developer since 2019.

4. Are you using Debian in your daily life? If yes, how?

I haven't used any other distro on my laptop since I got introduced to Debian.

5. Do you have any suggestions to improve women's participation in Debian?

I have been actively mentoring newcomers to Debian since I started contributing myself. I especially work towards reducing the gender gap inside Debian and the Free Software community in general. In my experience, the visibility of women already in the community encourages more women to participate. I also think we should reintroduce mentoring through debian-women.

Tássia Camões Araújo (tassia)

1. Who are you?

Tássia Camões Araújo, a Brazilian living in Canada. I'm a passionate learner who tries to push myself out of my comfort zone and always find something new to learn. I also love to mentor people on their learning journey. But I don't consider myself a typical geek. My challenge has always been to not get distracted by the next project before I finish the one I have in my hands. That said, I love being part of a community of geeks and feel empowered by it. I love Debian for its technical excellence, and it's always reassuring to know that someone is taking care of the things I don't like or can't do. When I'm not around computers, one of my favorite things is to feel the wind on my cheeks, usually while skating or riding a bike; I also love music, and I'm always singing a melody in my head.

2. How did you get introduced to Debian?

As a student, I was privileged to be introduced to FLOSS at the same time I was introduced to computer programming. My university could not afford to have labs in the usual proprietary software model, and what seemed like a limitation at the time turned out to be a great learning opportunity for me and my colleagues. I joined this student-led initiative to "liberate" our servers and build LTSP-based labs - where a single powerful computer could power a few dozen diskless thin clients. How revolutionary it was at the time! And what an achievement! From students to students, all using Debian. Most of that group became close friends; I've married one of them, and a few of them also found their way to Debian.

3. How long have you been into Debian?

I first used Debian in 2001, but my first real connection with the community was attending DebConf 2004. Since then, going to DebConfs has become a habit. It is that moment in the year when I reconnect with the global community and my motivation to contribute is boosted. And you know, in 20 years I've seen people become parents, grandparents, children grow up; we've had our own child and had the pleasure of introducing him to the community; we've mourned the loss of friends and healed together. I'd say Debian is like family, but not the kind you get at random once you're born; Debian is my family by choice.

4. Are you using Debian in your daily life? If yes, how?

These days I teach at Vanier College in Montréal. My favorite course to teach is UNIX, which I have the pleasure of teaching mostly using Debian. I try to inspire my students to discover Debian and other FLOSS projects, and we are happy to run a FLOSS club with participation from students, staff and alumni. I love to see these curious young minds put to the service of FLOSS. It is like recruiting soldiers for a good battle, one that can change their lives, as it certainly did mine.

5. Do you have any suggestions to improve women's participation in Debian?

I think the most effective way to inspire other women is to give visibility to active women in our community. Speaking at conferences, publishing content, being vocal about what we do so that other women can see us and see themselves in those positions in the future. It's not easy, and I don't like being in the spotlight. It took me a long time to get comfortable with public speaking, so I can understand the struggle of those who don't want to expose themselves. But I believe that this space of vulnerability can open the way to new connections. It can inspire trust and ultimately motivate our next generation. It's with this in mind that I publish these lines.

Another point we can't neglect is that in Debian we work on a volunteer basis, and this in itself puts us at a great disadvantage. In our societies, women usually take a heavier load than their partners in terms of caretaking and other invisible tasks, so it is hard to afford the free time needed to volunteer. This is one of the reasons why I bring my son to the conferences I attend, and so far I have received all the support I need to attend DebConfs with him. It is a way to share the caregiving burden with our community - it takes a village to raise a child. Besides allowing us to participate, it also serves to show other women (and men) that you can have a family life and still contribute to Debian.

My feeling is that we are not doing super well in terms of diversity in Debian at the moment, but that should not discourage us at all. That's the way it is now, but that doesn't mean it will always be that way. I feel like we go through cycles. I remember times when we had many more active female contributors, and I'm confident that we can improve our ratio again in the future. In the meantime, I just try to keep going, do my part, attract those I can, reassure those who are too scared to come closer. Debian is a wonderful community, it is a family, and of course a family cannot do without us, the women.

These interviews were conducted via email exchanges in October 2024. Thanks to all the wonderful women who participated. We really appreciate your contributions to Debian and to Free/Libre software.

20 October, 2024 10:01PM by Anupa Ann Joseph

October 18, 2024

Reproducible Builds (diffoscope)

diffoscope 281 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 281. This version includes the following changes:

[ Chris Lamb ]
* Don't try and test with systemd-ukify within Debian stable.

[ Jelle van der Waa ]
* Add support for UKI files.

You can find out more by visiting the project homepage.
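
As a quick illustration of the new UKI support (the file names below are hypothetical), comparing two Unified Kernel Images is the same two-argument invocation as for any other file type, optionally with an HTML report:

$ diffoscope --html uki-diff.html build-a/linux.efi build-b/linux.efi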

18 October, 2024 12:00AM

October 16, 2024

Sahil Dhiman

25, A Quarter of a Century Later

25, the number, says well into adulthood. Aviral pointed out that I have already passed the 33% mark of my life, which does hit different.

I had to keep reminding myself about my upcoming birthday. It didn't feel like birthday month, week or the day itself.

My writing took a long hiatus starting this past year. The first post came out in May, and quite a few people asked about the break. The hiatus had its own reasons, but restarting became harder with each passing day. Preparations for DebConf24 helped push DebConf23 (the first post this year) out of the door, after which things were more or less back on track on the writing front.

Recently, I have picked up the habit of reading monthly magazines. When I was a kid, I used to fancy seeing all the magazines at stationery and bookshops and thought of getting many when I was older. Seems like that was the connection, and now I'm heavily into monthly magazines and order many each month (including Hindi ones). They're fun short reads and cover a wide spectrum of topics.

Travelling has become a new-found love. I got the opportunity to visit a few new cities like Jaipur, Meerut, Seoul and Busan. My first international travel showed me what a society that cares about people's overall wellbeing can look like. Being in a foreign land expanded the concept of everything for me, and it showed me the beauty of silence in public places. I also re-visited Bengaluru, which felt good with its pleasant weather and food.

It has almost become a tradition to attend a few events: Jashn-e-Rekhta, DebConf, New Delhi World Book Fair, IndiaFOSS and FoECon. It's always great talking to new and old folks, sharing and learning about ideas. It's hard for an individual to learn, grow and understand the world in a silo. Like I keep saying about Free Software projects, it's all about the people, it's always about the people. Good and interesting people keep a project going and growing. (Side note: it's fine if a project goes away. Things are not meant to last in perpetuity; closing and moving on is fine.) Similarly, I have been trying to attend the Jaipur Literature Festival for a while now but failing. Hopefully, I will this time around.

Expanding my Free Software Mirror to India was a big highlight this year. The mirror project now has 3 nodes in India and 1 in Germany, serving almost 3-4 TB of mirror traffic daily. Increasing the number of software mirrors in India was and still is one of my goals. Hit me up if you want to help or set one up yourself. It's not that hard now, actually; which projects need more mirrors and how to set up the hosting has already been figured out.

One realization I would like to mention is to amplify and support people who are already doing something (and doing a better job at it), rather than reinventing the wheel. A single person might not be able to change the world, but a bunch of people experimenting and trying to make a difference certainly would.

Writing 25 felt harder than all previous years. It was a transitional year, with much internal growth from experiencing different perspectives and travelling.

To infinity and beyond!

16 October, 2024 03:07AM

October 15, 2024

Andrew Cater

Mini-DebConf Cambridge 20241013 1300

 LATE NEWS

I haven't blogged until now: I should have done so from Thursday onwards.

It's a joy to be here in Cambridge at ARM HQ. Lots of people I recognise from last year are here; lots are *not* here because this mini-conference is a month before the next one in Toulouse and many people can't attend both.

Two days' worth of chatting, working on bits and pieces, and informal meetings was a very good and useful way to build relationships and let teams find some space for themselves.

Lots of quiet hacking going on - a few loud conversations. A new ARM machine in mini-ITX format - see Steve McIntyre's blog on planet.debian.org about Rock 5 ITX.

Two days' worth of talks for Saturday and Sunday. For some people, this is a first time. Lightning talks are particularly good for breaking down barriers: three slides and five minutes (and the chance for a bit of gamesmanship to break the rules creatively).

Longer talks: a couple from Steve Capper of ARM were particularly helpful to those interested in upcoming development. A couple of the talks in the schedule are traditional: if the release team are here, they tell us what they are doing, for example.

ARM are the main sponsors and have been very generous in giving us conference and facilities space. Fast network, coffee and interested people - what's not to like? :)

[EDIT/UPDATE - And my talk is finished and went fairly well: slides have now been uploaded and the talk is linked from the Mini-DebConf pages]

15 October, 2024 10:13PM by Andrew Cater ([email protected])

Lukas Märdian

Waiting for a Linux system to be online


What is an “online” system?

Networking is a complex topic, and there is lots of confusion around the definition of an “online” system. Sometimes the boot process gets delayed by up to two minutes because the system is still waiting for one or more network interfaces to become ready. Systemd provides the network-online.target that other service units can rely on if they are deemed to require network connectivity. But what does “online” actually mean in this context? Is a link-local IP address enough, do we need a routable gateway, and what about DNS name resolution?

The requirements for an “online” network interface depend very much on the services using that interface. For some services it might be good enough to reach their local network segment (e.g. to announce Zeroconf services), while others need to resolve domain names (e.g. to mount an NFS share) or reach the global internet to run a web server. On the other hand, the implementation of network-online.target varies depending on which networking daemon is in use, e.g. systemd-networkd-wait-online.service or NetworkManager-wait-online.service. For Ubuntu, we created a specification that describes what we as a distro expect an “online” system to be. With a definition in place, we are able to tackle the network-online ordering issues that have been reported over the years and can work out solutions to avoid delayed boot times on Ubuntu systems.

In essence, we want systems to reach the following networking state to be considered online:

  1. Do not wait for “optional” interfaces to receive network configuration
  2. Have IPv6 and/or IPv4 “link-local” addresses on every network interface
  3. Have at least one interface with a globally routable connection
  4. Have functional domain name resolution on any routable interface
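
A service that genuinely needs such an online state at startup (see the note of caution at the end of this post) would order itself after network-online.target in the usual systemd way. The unit below is a hypothetical sketch, not something shipped by Netplan:

# /etc/systemd/system/nightly-sync.service (hypothetical example)
[Unit]
Description=Sync job that needs a routable connection and working DNS
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/nightly-sync

[Install]
WantedBy=multi-user.target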

A common implementation

NetworkManager and systemd-networkd are two very common networking daemons used on modern Linux systems. But they originate from different contexts and therefore show different behaviours in certain scenarios, such as wait-online. Luckily, on Ubuntu we already have Netplan as a unification layer on top of those networking daemons, that allows for common network configuration, and can also be used to tweak the wait-online logic.

With the recent release of Netplan v1.1 we introduced initial functionality to tweak the behaviour of the systemd-networkd-wait-online.service, as used on Ubuntu Server systems. When Netplan is used to drive the systemd-networkd backend, it will emit an override configuration file in /run/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf, listing the specific non-optional interfaces that should receive link-local IP configuration. In parallel to that, it defines a list of network interfaces that Netplan detected to be potential global connections, and waits for any of those interfaces to reach a globally routable state.

Such an override config file might look like this:

[Unit]
ConditionPathIsSymbolicLink=/run/systemd/generator/network-online.target.wants/systemd-networkd-wait-online.service

[Service]
ExecStart=
ExecStart=/lib/systemd/systemd-networkd-wait-online -i eth99.43:carrier -i lo:carrier -i eth99.42:carrier -i eth99.44:degraded -i bond0:degraded
ExecStart=/lib/systemd/systemd-networkd-wait-online --any -o routable -i eth99.43 -i eth99.45 -i bond0
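
On the Netplan side, point 1 of the list above maps to the optional flag: interfaces marked optional: true are left out of the wait-online logic. A minimal sketch, with hypothetical interface names, might look like this:

network:
  version: 2
  ethernets:
    enp5s0:
      dhcp4: true
    enp6s0:
      # secondary NIC that should not delay network-online.target
      dhcp4: true
      optional: true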

In addition to the new features implemented in Netplan, we reached out to upstream systemd, proposing an enhancement to the systemd-networkd-wait-online service that integrates it with systemd-resolved to check for the availability of DNS name resolution. Once this is implemented upstream, we will be able to fully control the systemd-networkd backend on Ubuntu Server systems, so that it behaves consistently and according to the definition of an “online” system that was laid out above.

Future work

The story doesn't end there, because Ubuntu Desktop systems use NetworkManager as their networking backend. This daemon provides its very own nm-online utility, which is used by the NetworkManager-wait-online systemd service. It implements a much higher-level approach, looking at the networking daemon in general instead of at individual network interfaces. By default, it considers a system to be online once every “autoconnect” profile got activated (or failed to activate), meaning that either an IPv4 or IPv6 address got assigned.

Considerable enhancements need to be implemented in this tool for it to be controllable in a fine-grained way, similar to systemd-networkd-wait-online, so that it can be instructed to wait for specific networking states on selected interfaces.

A note of caution

Making a service depend on network-online.target is considered an antipattern in most cases. This is because networking on Linux systems is very dynamic, and the systemd target can only ever reflect the networking state at a single point in time. It cannot guarantee that this state will be maintained over the uptime of your system, and it has the potential to delay the boot process considerably. Cables can be unplugged, wireless connectivity can drop, or remote routers can go down at any time, affecting the connectivity state of your local system. Therefore, “instead of wondering what to do about network.target, please just fix your program to be friendly to dynamically changing network configuration.” [source].

15 October, 2024 07:33AM by slyon

Iustin Pop

Optical media lifetime - one data point

Way back (more than 10 years ago), when I was doing DVD-based backups, I knew that normal DVDs/Blu-Rays are not long-term archival solutions, and that if I was serious about doing optical media backups, I would need to switch to M-Disc. I actually bought a small stack of M-Disc Blu-Rays, but never used them.

I then switched to other backup solutions and forgot about the whole topic. Until, this week, while sorting stuff, I happened upon a set of DVD backups from a range of years, and was very curious whether they were still readable after all this time.

And, to my surprise, there were no surprises! Went backward in time, and:

  • 2014, TDK DVD+R, fully readable
  • 2012, JVC DVD+R and TDK DVD+R, fully readable
  • 2010, Verbatim DVD+R, fully readable
  • 2009/2008/2007, Verbatim DVD+R, 4 DVDs, fully readable

I also found a stack of dual-layer DVD+Rs from 2012-2014, some definitely Verbatim, and some unmarked (they were intended to be printed on), but likely Verbatim as well. All worked just fine. Just that, even at ~8GiB per disk, backing up raw photo files took way too many disks, even in 2014 😅.

At this point I was happy that all 12+ DVDs I found, ranging from 10 to 14 years, are all good. Then I found a batch of 3 CDs! Here the results were mixed:

  • 2003: two TDK “CD-R80”, “Mettalic”, 700MB: fully readable, after 21 years!
  • unknown year, likely around 1999-2003, but no later, “Creation” CD-R, 700MB: read errors to the extent I can’t even read the disk signature (isoinfo -d).

I think the takeaway is that all the explicitly selected media - TDK, JVC and Verbatim - held up for 10-20 years. Valid reads from summer 2003 are mind-boggling to me, for (IIRC) organic media - not sure about the “TDK metallic” substrate. And when you just pick whatever (“Creation”), well, the results are mixed.

Note that all of this was about CDs and DVDs. I have no idea how Blu-Rays behave, since I don't think I ever wrote a Blu-Ray. In any case, this was surprising to me, and it makes me rethink my backup options a bit. Sizes of 25 to 100GB for Blu-Rays are reasonable for most critical data. And they're WORM, as opposed to most LTO media, which is re-writable (and to some small extent, prone to accidental wiping).

Now, I should check those M-Discs to see if they can still be written to, after 10 years 😀

15 October, 2024 05:00AM

October 14, 2024

Scarlett Gately Moore

Kubuntu 24.10 Released, KDE Snaps at 24.08.2, and I lived to tell you about it!

Happy 28th Birthday KDE!

Sorry my blog updates have been MIA. Let me tell you a story…

As some of you know, 3 months ago I was in a no-fault car accident. Thankfully, the only injury was a broken arm. The ER sent me home in a sling and told me it was a clean break that would mend itself in no time. After a week of excruciating pain I went to my follow-up doctor appointment, and with my x-rays in hand, the doc told me it was far from a clean break and needed surgery. So after a week of my shattered bone scraping my nerves and causing pain I have never felt before, I finally went in for surgery! They put in a metal plate with screws to hold the bone in place so it could properly heal. The nerve pain was gone, so I thought I was on the mend.

Some time went by and the swelling still had not subsided. The doctors were not as concerned about this as I was, so I carried on until it became really inflamed and developed fever blisters. After no success in reaching the doctor's office, my husband borrowed the neighbors' car and rushed me to the ER. Good thing too: I had an infection. After a 5-day stay in the hospital, they sent us home loaded with antibiotics and trained my husband in wound packing. We did everything right, kept the place immaculate, followed orders with the wound care, took my antibiotics, yet when they ran out there was still no sign of relief, or healing. I went back to the doctors and they gave me another month's supply of antibiotics.

Two days after my final dose my arm became inflamed again, with extra spectacular levels of pain to go with it. I called the doctor's office... They said to come in on my appointment day (4 days away). I asked, "You aren't concerned with this inflammation?", to which they replied, "No." Ok, maybe I was overreacting and it was all in my head; I could power through 4 more days. The following morning my husband observed fever blisters and the wound site was clearly not right, so once again off we went to the ER. Well... thankfully we did. I was in sepsis and could have died... After deliberating with the doctor on the course of action for treatment, the doctor accepted our plea to remove the plate, rather than tighten the screws and have me drive 100 miles to the hospital every day for IV antibiotics (umm, I don't have a car!?). So after another 4-day stay I was released into the world, alive and well.

I am happy to report that the swelling is almost gone, the pain is minimal, and I am finally healing nicely. I am still in a sling and have to be super careful, as my arm has not fully knitted. So with that I am bummed to say: no traveling for me, no Ubuntu Summit 🙁

I still need help with that car, if it weren’t for our neighbor, this story would have ended much differently.

https://gofund.me/00942f47

Despite my tragic few months for my right arm, my left arm has been quite busy. Thankfully I am a lefty! On to my work progress report.

Kubuntu:

With Plasma 6! A big thank you to the Debian KDE/Qt team and Rik Mills; we could not have done it without you!

KDE Snaps:

All release-service snaps are done, save for a few problematic ones that are still WIP. I have released 24.08.2, which you can find here:

https://snapcraft.io/publisher/kde

I completed the Qt 6 and KDE Frameworks 6 content packs for core24.

Snapcraft:

I have a PR in for core24 support in the kde-neon-6 extension.

That’s all for now. Thanks for stopping by!

14 October, 2024 08:58PM by sgmoore

hackergotchi for Philipp Kern

Philipp Kern

Touch Notifications for YubiKeys

When setting up your YubiKey you have the option to require the user to touch the device to authorize an operation (be it signing, decrypting, or authenticating). While web browsers often provide clear prompts for this, other applications like SSH or GPG will not. Instead the operation will just hang without any visual indication that user input is required. The YubiKey itself will blink, but depending on where it is plugged in that is not very visible.

yubikey-touch-detector (fresh in unstable) solves this issue by providing a way for your desktop environment to signal the user that the device is waiting for a touch. It provides an event feed on a socket that other components can consume. It comes with libnotify support and there are some custom integrations for other environments.

For GNOME and KDE, libnotify support should be sufficient; however, you still need to turn it on:

$ mkdir -p ~/.config/yubikey-touch-detector
$ sed -e 's/^YUBIKEY_TOUCH_DETECTOR_LIBNOTIFY=.*/YUBIKEY_TOUCH_DETECTOR_LIBNOTIFY=true/' \
  < /usr/share/doc/yubikey-touch-detector/examples/service.conf.example \
  > ~/.config/yubikey-touch-detector/service.conf
$ systemctl --user restart yubikey-touch-detector

I would still have preferred a more visible, more modal prompt. I guess that would be an exercise for another time, listening to the socket and presenting a window. But for now, desktop notifications will do for me.
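
For the curious, a rough sketch of that "exercise for another time" could be as small as a shell loop around socat and zenity. Note that the socket path and the fixed five-byte event format (e.g. GPG_1 when a touch is pending, GPG_0 when it is no longer needed) are assumptions here; check the yubikey-touch-detector documentation for the actual protocol.

#!/bin/bash
# Hypothetical sketch: show a dialog whenever the detector reports a pending touch.
# Assumption: events arrive as 5-byte strings like "GPG_1"/"GPG_0" or "U2F_1"/"U2F_0"
# on the socket below.
SOCKET="${XDG_RUNTIME_DIR}/yubikey-touch-detector.socket"

socat -u "UNIX-CONNECT:${SOCKET}" - | while IFS= read -r -n 5 event; do
    case "${event}" in
        *_1) zenity --info --title="YubiKey" --text="Touch your YubiKey to continue" & ;;
        *_0) : ;;  # touch satisfied or cancelled; this simple sketch leaves the dialog open
    esac
done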

PS: I have not managed to get SSH's no-touch-required to work with YubiKey 4, while it works just fine with a YubiKey 5.

14 October, 2024 10:39AM by Philipp Kern ([email protected])

October 13, 2024

hackergotchi for Andy Simpkins

Andy Simpkins

The state of the art

A long time ago….

A long time ago a computer was a woman (I think almost exclusively a woman, not a man) who was employed to do a lot of repetitive mathematics – typically for accounting and stock / order processing.

Then along came Lyons, who deployed an artificial computer to perform the same task, only with fewer errors in less time. Modern day computing was born – we had entered the age of the Digital Computer.

These computers were large, consumed huge amounts of power but were precise, and gave repeatable, verifiable results.

Over time the huge mainframe digital computers have shrunk in size, increased in performance, and consumed far less power – so much so that they often didn’t need the specialist CFC-based, refrigerated liquid cooling systems of their bigger mainframe counterparts, only requiring forced air flow, and occasionally just convection cooling. They shrank so far and became cheap enough that the Personal Computer came to be, replacing the mainframe and its time-shared resources with a machine per user. Desktop or even portable “laptop” computers were everywhere.

We networked them together, so now we could share information around the office. A few computers were given the specialist task of being available all the time so we could share documents or host databases; these servers were basically PCs designed to operate 24×7, usually more powerful than their desktop counterparts (or at least with faster storage and networking).

Next we joined these networks together and the internet was born. The dream of a paperless office might actually become realised – we can now send email (and documents) from one organisation (or individual) to another. We can make our specialist computer applications available beyond just the office, and web servers / web apps come of age.

Fast forward a few years and all of a sudden we need huge data-halls filled with “rack scale” machines augmented with exotic GPUs and NPUs, again with refrigerated liquid cooling, all to do the same tasks that we were doing previously without the magical buzzword that has been named AI; because we all need another dot-com bubble or blockchain bandwagon to jump aboard. Our AI-enabled searches take slightly longer, consume magnitudes more power, and best of all the results we are given may or may not be correct…

Progress: less precise answers, taking longer, consuming more power, without any verification, and often giving a different result if you repeat your question; AND we still need a personal computing device to access this wondrous thing.

Remind me again why we are here?

(timelines and huge swathes of history simply ignored to make an attempted comic point – this is intended to make a point and not be scholarly work)

13 October, 2024 03:15PM by andy

Taavi Väänänen

Bulk downloading Wikimedia Commons categories

Wikimedia Commons, the Wikimedia project for freely licensed media files, also contains a bunch of photos by me and photos of me at various events. While I don't think Commons is going away anytime soon, I would still like to have a local copy of those images available on my own storage hardware.

Obviously this requires some way to query for photos you want to download. I'm using Commons categories for this, since that's easy to implement and works for both use cases. The Commons community tends to come up with very specific categories that you can use, and if not, you can usually categorize the files yourself.

[Screenshot: me replying 'shh' to a Discord message showing myself categorizing photos about me and accusing me of COI editing]

thankfully Commons has no such thing as a Conflict of interest (COI) policy

There is almost an existing tool for this: Sam Wilson's mwcli project has support for exporting images one has uploaded to Commons. However, I couldn't use that to download photos of me that others have uploaded, plus it's written in PHP and I don't exactly want to deal with the problem of figuring out how to package it in a way I could neatly install on my NAS.

So I wrote my own tool for it, called comload. It's written in Python because Python is easy to deploy (I can just throw it in a .deb and upload it to my internal repository), and because I did not find a Go library to handle Action API pagination for me. The basic usage is like this:

$ comload --subcats "Taavi Väänänen"

This will download any files in Category:Taavi Väänänen and its sub-categories to the current directory. Former image versions, as well as the image description and SDC data, if any, are also included. And it's smart enough not to download any files that are already there on future runs, so you can just throw it in a systemd timer to get any future files. I'd still like it to handle moved files without creating a duplicate copy, but otherwise I'm really happy with the current state.
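
To make the systemd timer suggestion concrete, a minimal user-level timer/service pair might look like the sketch below. The unit names, target directory and install path are hypothetical; only the comload invocation mirrors the usage shown above.

# ~/.config/systemd/user/comload.service (hypothetical)
[Unit]
Description=Mirror my Wikimedia Commons categories locally

[Service]
Type=oneshot
WorkingDirectory=%h/commons-mirror
ExecStart=/usr/bin/comload --subcats "Taavi Väänänen"

# ~/.config/systemd/user/comload.timer (hypothetical)
[Unit]
Description=Run comload daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Enable it with systemctl --user enable --now comload.timer, and any new files in the categories will show up after the next run.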

comload is available from PyPI and from my Git server directly, and is licensed under the GPLv3.

13 October, 2024 12:00AM by Taavi Väänänen ([email protected])

October 11, 2024

hackergotchi for Steve McIntyre

Steve McIntyre

Rock 5 ITX

It's been a while since I've posted about arm64 hardware. The last machine I spent my own money on was a SolidRun Macchiatobin, about 7 years ago. It's a small (mini-ITX) board with a 4-core arm64 SoC (4 * Cortex-A72) on it, along with things like a DIMM socket for memory, lots of networking, and 3 SATA disk interfaces.

The Macchiatobin was a nice machine compared to many earlier systems, but it took quite a bit of effort to get it working to my liking. I replaced the on-board U-Boot firmware binary with an EDK2 build, and that helped. After a few iterations we got a new build including graphical output on a PCIe graphics card. Now it worked much more like a "normal" x86 computer.

I still have that machine running at home, and it's been a reasonably reliable little build machine for arm development and testing. It's starting to show its age, though - the onboard USB ports no longer work, and so it's no longer useful for doing things like installation testing. :-/

So...

I was involved in a conversation in the #debian-arm IRC channel a few weeks ago, and diederik suggested the Radxa Rock 5 ITX. It's another mini-ITX board, this time using a Rockchip RK3588 CPU. Things have moved on - the CPU is now an 8-core big.LITTLE config: 4*Cortex A76 and 4*Cortex A55. The board has NVMe on-board, 4*SATA, built-in Mali graphics from the CPU, soldered-on memory. Just about everything you need on an SBC for a small low-power desktop, a NAS or whatever. And for about half the price I paid for the Macchiatobin. I hit "buy" on one of the listed websites. :-)

A few days ago, the new board landed. I picked the version with 24GB of RAM and bought the matching heatsink and fan. I set it up in an existing case borrowed from another old machine and tried the Radxa "Debian" build. All looked OK, but I clearly wasn't going to stay with that. Onwards to running a native Debian setup!

I installed an EDK2 build from https://github.com/edk2-porting/edk2-rk3588 onto the onboard SPI flash, then rebooted with a Debian 12.7 (Bookworm) arm64 installer image on a USB stick. How much trouble could this be?

I was shocked! It Just Worked (TM)

I'm running a standard Debian arm64 system. The graphical installer ran just fine. I installed onto the NVMe, adding an Xfce desktop for some simple tests. Everything Just Worked. After many years of fighting with a range of different arm machines (from simple SBCs to desktops and servers), this was without doubt the most straightforward setup I've ever done. Wow!

It's possible to go and spend a lot of money on an Ampere machine, and I've seen them work well too. But for a hobbyist user (or even a smaller business), the Rock 5 ITX is a lovely option. Total cost to me for the board with shipping fees, import duty, etc. was just over £240. That's great value, and I can wholeheartedly recommend this board!

The two things that are missing compared to the Macchiatobin? This board has soldered-on memory (but hey, 24G is plenty for me!), and it doesn't have a PCIe slot. But it has sufficient onboard network, video and storage interfaces that I think it will cover most people's needs.

Where's the catch? It seems these are very popular right now, so it can be difficult to find these machines in stock online.

FTAOD, I should also point out: I bought this machine entirely with my own money, for my own use for development and testing. I've had no contact with the Radxa or Rockchip folks at all here, I'm just so happy with this machine that I've felt the need to shout about it! :-)

Here are some pictures...

Rock 5 ITX top view

Rock 5 ITX back panel view

Rock 5 EDK2 startup

Rock 5 xfce login

Rock 5 ITX running Firefox

11 October, 2024 01:53PM