<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:georss="http://www.georss.org/georss"
        xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
        xmlns:media="http://search.yahoo.com/mrss/"><channel>
<title>RedAlder Blog</title>
<atom:link href="https://redalder.org/blog/rss.xml" rel="self" type="application/rss+xml" />
<link>https://redalder.org/blog</link>
<description><![CDATA[]]></description>
<language>en</language>
<pubDate>Thu, 16 Apr 2026 11:14:09 +0000</pubDate>
<lastBuildDate>Thu, 16 Apr 2026 11:14:09 +0000</lastBuildDate>
<generator>Emacs 30.2 Org-mode 9.7.11</generator>
<webMaster>magic_rb@redalder.org (magic_rb)</webMaster>
<image>
<url>https://redalder.org/icons/favicon-512x512.png</url>
<title>RedAlder Blog</title>
<link>https://redalder.org/blog</link>
</image>

<item>
<title>I'm tired of this shit</title>
<link>https://redalder.org/blog/im-tired-of-this-shit.html</link>
<author>magic_rb@redalder.org (magic_rb)</author>
<guid isPermaLink="false">https://redalder.org/blog/im-tired-of-this-shit.html</guid>
<pubDate>Wed, 15 Apr 2026 00:00:00 +0000</pubDate>

<description><![CDATA[<p>
I'm tired of a lot of things, of some people, of politics, but what's currently grinding my gears the most are LLMs. No! Don't click away, I'm not about to try to convince you that LLMs are bad, I'm tired of that too. What I hope to convey is how tired I am of discovering new software projects.
</p>

<p>
Back in the days of old, before this LLM shit took off, discovering a new project was pleasant. Someone would post a new Emacs package, or a new Haskell project on Reddit or Discourse, I'd click the link and then scroll pleasantly through the code. I would spend a couple of minutes reading code and enjoying myself. I'd peruse the repository, both learning and also trying to guess the skill level of the developer behind it. At the end of my reading session, I'd then decide if I wanted to use the project or not. Things were simple back then.
</p>

<p>
But then LLMs came along and people got upset - in my opinion for good reason - and so they started hating on anyone and anything even remotely touched by an LLM. The natural and expected response to that is LLM proponents hiding the fact that they use LLMs. In effect, lying to me.
</p>

<p>
These days, discovering a new project is a chore, because first things first, I have to answer the unpleasant question:
</p>

<blockquote>
<p>
Is this person trying to lie to me?
</p>
</blockquote>

<p>
I don't enjoy second-guessing everything. I want to write software, share it freely with others, and play with computers, not be a human LLM detector.
</p>

<p>
Please stop hiding that you're using LLMs; I think most people are tired enough that they won't lynch you for using one. Honestly disclosing LLM use allows me to skip your project, or look at it later when I have the mental capacity to look for extremely subtle bugs that you didn't catch.
</p>

<p>
Side note: I personally think that when you post a project on Reddit/Discourse, the post itself should declare that you used an LLM. We all know that LLMs are a touchy subject, and as such I find it disrespectful when you don't do that. Knowing in advance what to expect lowers my disappointment and, as said before, allows me to focus my energy on actually reading the code rather than on trying to detect LLM use.
</p>
]]></description>
</item>
<item>
<title>disko-zfs: Declaratively Managing ZFS Datasets</title>
<link>https://redalder.org/blog/disko-zfs-declaratively-managing-zfs-datasets.html</link>
<author>magic_rb@redalder.org (magic_rb)</author>
<guid isPermaLink="false">https://redalder.org/blog/disko-zfs-declaratively-managing-zfs-datasets.html</guid>
<pubDate>Fri, 23 Jan 2026 00:00:00 +0000</pubDate>

<description><![CDATA[<p>
If you're at least somewhat like me, then you use ZFS almost religiously. Every server, every laptop, every appliance (excluding those that really could not run ZFS due to memory constraints or fragile flash storage) runs ZFS. You run ZFS even if you don't use mirrors, stripes, or any kind of special vdevs. You run ZFS because you want to, because having snapshots, datasets and compression makes the device feel familiar and welcoming. If that's at least somewhat you, you will appreciate the tool I have in store for you today.
</p>

<h3>Too Many Datasets</h3>

<p>
Given a situation where a ZFS pool has just too many datasets for you to comfortably manage, or perhaps you have a few datasets but just learned of a property that you really <i>should</i> have set from the start, what do you do? Well, I don't know what <i>you</i> do; I would love to hear about that, so please do reach out to me, preferably over Matrix.
</p>

<p>
In any case, what I came up with is <a href="https://github.com/numtide/disko-zfs">disko-zfs</a>: a simple Rust program that declaratively manages datasets on a zpool. It does this based on a JSON specification, which lists the datasets, their properties, and a few pieces of extra information.
</p>

<h3>The Schema</h3>

<div class="org-src-container">
<label class="org-src-name"><span class="listing-number">Listing 1: </span>A production specification from one of Numtide's machines</label><pre class="src src-js-json">{
  <span class="org-string">"datasets"</span>: {
    <span class="org-string">"zroot"</span>: {
      <span class="org-string">"properties"</span>: {
        <span class="org-string">"atime"</span>: <span class="org-string">"off"</span>,
        <span class="org-string">"com.sun:auto-snapshot"</span>: <span class="org-string">"false"</span>,
        <span class="org-string">"compression"</span>: <span class="org-string">"zstd-2"</span>,
        <span class="org-string">"dnodesize"</span>: <span class="org-string">"auto"</span>,
        <span class="org-string">"mountpoint"</span>: <span class="org-string">"none"</span>,
        <span class="org-string">"recordsize"</span>: <span class="org-string">"128K"</span>,
        <span class="org-string">"xattr"</span>: <span class="org-string">"on"</span>
      }
    },
    ...
    <span class="org-string">"zroot/ds1/persist/var/lib/forgejo"</span>: {
      <span class="org-string">"properties"</span>: {
        <span class="org-string">"mountpoint"</span>: <span class="org-string">"legacy"</span>
      }
    },
    <span class="org-string">"zroot/ds1/persist/var/lib/postgresql"</span>: {
      <span class="org-string">"properties"</span>: {
        <span class="org-string">"mountpoint"</span>: <span class="org-string">"legacy"</span>,
        <span class="org-string">"recordsize"</span>: <span class="org-string">"8k"</span>
      }
    },
    ...
  },
  <span class="org-string">"ignoredDatasets"</span>: [
    <span class="org-string">"zroot/ds1/root/*"</span>
  ],
  <span class="org-string">"ignoredProperties"</span>: [
    <span class="org-string">"nixos:shutdown-time"</span>,
    <span class="org-string">":generation"</span>,
    <span class="org-string">"com.sun:auto-snapshot"</span>
  ],
  <span class="org-string">"logLevel"</span>: <span class="org-string">"info"</span>
}
</pre>
</div>

<p>
As you can see, it's relatively self-explanatory. The information that this JSON format carries is:
</p>

<ul>
<li>an attribute set of datasets and their properties</li>
<li>a list of ignored datasets that <code>disko-zfs</code> will never create, modify, or destroy</li>
<li>a list of ignored properties that <code>disko-zfs</code> will never create, modify, or delete</li>
<li>the log level at which <code>disko-zfs</code> logs</li>
</ul>
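To make the expected shape concrete, here is a quick shell sanity check of a minimal spec. This is just a sketch: the key names come from the listing above, and <code>disko-zfs</code> itself presumably validates far more strictly.

```shell
# Write a minimal spec and check that the top-level keys from the schema are present.
cat > /tmp/spec.json <<'EOF'
{
  "datasets": {
    "zroot": { "properties": { "mountpoint": "none" } }
  },
  "ignoredDatasets": [],
  "ignoredProperties": [],
  "logLevel": "info"
}
EOF

ok=1
for key in datasets ignoredDatasets ignoredProperties logLevel; do
  grep -q "\"$key\"" /tmp/spec.json || { echo "missing key: $key"; ok=0; }
done
[ "$ok" = 1 ] && echo "spec looks sane"
```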

<p>
With the schema in hand, we can execute <code class="src src-bash">disko-zfs apply --file spec.json</code> on a live machine, where we get the following output:
</p>

<div class="org-src-container">
<pre class="src src-fundamental"># !! Destructive Commands !!
&gt; zfs destroy zroot/ds1/persist/var/lib/gitea
# Applying...
+ zfs set mountpoint=legacy zroot/ds1/persist/var/lib/forgejo
+ zfs create -orecordsize=8k -omountpoint=legacy zroot/ds1/persist/var/lib/postgresql
# Done!
</pre>
</div>

<p>
It tells us a few things:
</p>

<ul>
<li>that <code>zfs destroy zroot/ds1/persist/var/lib/gitea</code> would be executed, but it wasn't</li>
<li>that the following two commands have been executed:
<ul>
<li><code>zfs set mountpoint=legacy zroot/ds1/persist/var/lib/forgejo</code></li>
<li><code>zfs create -orecordsize=8k -omountpoint=legacy zroot/ds1/persist/var/lib/postgresql</code></li>
</ul>
</li>
</ul>

<p>
If we run <code class="src src-bash">disko-zfs apply --file spec.json</code> again on the same machine, <code>disko-zfs</code> will correctly report that there is nothing to do.
</p>

<p>
Instead of directly applying the configuration, we could have used the <code>plan</code> subcommand to have <code>disko-zfs</code> just tell us what it would do if we executed <code>apply</code>, even for non-destructive commands. This is a good way of verifying that <code>disko-zfs</code> won't do anything unexpected.
</p>
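Conceptually, what <code>plan</code> and <code>apply</code> compute is a diff between the datasets declared in the spec and the datasets that actually exist on the pool. A hand-wavy shell sketch of that idea (not the actual implementation; the dataset lists are made up to mirror the output above):

```shell
# Desired datasets, as listed in the spec:
printf '%s\n' \
  zroot \
  zroot/ds1/persist/var/lib/forgejo \
  zroot/ds1/persist/var/lib/postgresql | sort > /tmp/desired

# Datasets currently on the pool (what `zfs list` would report):
printf '%s\n' \
  zroot \
  zroot/ds1/persist/var/lib/forgejo \
  zroot/ds1/persist/var/lib/gitea | sort > /tmp/actual

# In the spec but missing on the pool -> candidates for `zfs create`:
comm -23 /tmp/desired /tmp/actual

# On the pool but absent from the spec -> candidates for `zfs destroy` (destructive!):
comm -13 /tmp/desired /tmp/actual
```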

<h3>NixOS Integration</h3>

<p>
If you also happen to be a NixOS user, then this section will be of interest to you. <code>disko-zfs</code> supports NixOS natively; it exposes a NixOS module, which adds the <code>disko.zfs</code> option group. Importantly, it adds the <code>disko.zfs.settings</code> option, whose structure is isomorphic to the structure used by the <code>disko-zfs</code> program itself; as such we can directly translate the previously shown JSON to Nix and everything will work as expected, neat!
</p>

<div class="org-src-container">
<pre class="src src-nix">{
  <span class="org-nix-attribute">disko.zfs</span> = {
    <span class="org-nix-attribute">enable</span> = <span class="org-nix-builtin">true</span>;
    <span class="org-nix-attribute">settings</span> = {
      <span class="org-nix-attribute">datasets</span> = {
        <span class="org-string">"zroot"</span> = {
          <span class="org-nix-attribute">properties</span> = {
            <span class="org-nix-attribute">atime</span> = <span class="org-string">"off"</span>;
            <span class="org-string">"com.sun:auto-snapshot"</span> = <span class="org-string">"false"</span>;
            <span class="org-nix-attribute">compression</span> = <span class="org-string">"zstd-2"</span>;
            <span class="org-nix-attribute">dnodesize</span> = <span class="org-string">"auto"</span>;
            <span class="org-nix-attribute">mountpoint</span> = <span class="org-string">"none"</span>;
            <span class="org-nix-attribute">recordsize</span> = <span class="org-string">"128K"</span>;
            <span class="org-nix-attribute">xattr</span> = <span class="org-string">"on"</span>;
          };
        };
        ...
        <span class="org-string">"zroot/ds1/persist/var/lib/forgejo"</span> = {
          <span class="org-nix-attribute">properties</span> = {
            <span class="org-nix-attribute">mountpoint</span> = <span class="org-string">"legacy"</span>;
          };
        };
        <span class="org-string">"zroot/ds1/persist/var/lib/postgresql"</span> = {
          <span class="org-nix-attribute">properties</span> = {
            <span class="org-nix-attribute">mountpoint</span> = <span class="org-string">"legacy"</span>;
            <span class="org-nix-attribute">recordsize</span> = <span class="org-string">"8k"</span>;
          };
        };
        ...
      };
      <span class="org-nix-attribute">ignoredDatasets</span> = [
        <span class="org-string">"zroot/ds1/root/*"</span>
      ];
      <span class="org-nix-attribute">ignoredProperties</span> = [
        <span class="org-string">"nixos:shutdown-time"</span>
        <span class="org-string">":generation"</span>
        <span class="org-string">"com.sun:auto-snapshot"</span>
      ];
      <span class="org-nix-attribute">logLevel</span> = <span class="org-string">"info"</span>;
    };
  };
}
</pre>
</div>

<p>
Given the above NixOS configuration, the next time you run <code class="src src-bash">nixos-rebuild switch --flake .#your-server</code>, <code>disko-zfs</code> will make any changes necessary to bring your zpool into shape. You can also execute <code class="src src-bash">nixos-rebuild dry-activate --flake .#your-server</code> instead, which will cause <code>disko-zfs</code> to run in <code>plan</code> mode and merely print out the changes it would make, a nifty way to test your changes.
</p>
<div id="outline-container-file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-disko-zfs-declaratively-managing-zfs-datasets-org-disko-integration" class="outline-3">
<h3 id="file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-disko-zfs-declaratively-managing-zfs-datasets-org-disko-integration"><span class="section-number-3">2.1.</span> Disko Integration</h3>
<div class="outline-text-3" id="text-2-1">
<p>
And if you use ZFS, NixOS, <b>and</b> <code>disko</code> then I have another thing in store for you. <code>disko-zfs</code> integrates with, well, <code>disko</code>, who would have expected that?
</p>

<p>
If the <code>disko-zfs</code> module detects that <code>disko</code> is also imported, it will translate any zpools and datasets declared through disko into the format used by <code>disko-zfs</code> and automatically apply them to the <code>disko.zfs.settings.datasets</code> option. That means that you can still declare your datasets through <code>disko</code> and let <code>disko</code> do the thing it's great at: installation. Once you need to create more datasets, however, or perhaps delete a dataset, that's where <code>disko-zfs</code> kicks in and realizes your wishes. With <code>disko-zfs</code>, <code>disko</code> gains the power to keep your datasets up to date even after installation.
</p>
</div>
</div>
]]></description>
</item>
<item>
<title>Convenient Inconvenience</title>
<link>https://redalder.org/blog/convenient-inconvenience.html</link>
<author>magic_rb@redalder.org (magic_rb)</author>
<guid isPermaLink="false">https://redalder.org/blog/convenient-inconvenience.html</guid>
<pubDate>Sun, 31 Dec 2023 00:00:00 +0000</pubDate>

<description><![CDATA[<p>
I've grown up with technology; when I was 10 it was already the 2010s. I never had the option of avoiding technology, as back then I wasn't old enough to make these sorts of decisions.
</p>

<h3>YouTube</h3>

<p>
Last year, while talking to a great friend of mine about addictions, he mentioned that making things inconvenient for oneself is a good way to stop doing them. I started off by DNS blocking YouTube domains on all my devices; the reason was that I was addicted to watching speedruns and letting YouTube lull me into a state of mindless video consumption. I could still access YouTube if I wanted to through alternative frontends like Invidious or Piped, but let's be frank, the UX of those slightly sucks. Surprisingly though, their bad UX is a good thing. As expected, I stopped frequenting YouTube and began doing other things, like watching TV shows, programming, and actually playing the games I was watching people play online. In the end this experiment turned out to be a great success.
</p>
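The post doesn't say exactly which blocking mechanism was used, so purely as an illustration: the lowest-tech way to "DNS block" a domain on a single device is a hosts-file entry that null-routes it. A sketch, written to a scratch file here; on a real system you would append these lines to <code>/etc/hosts</code> as root:

```shell
# Null-route a few YouTube domains (scratch file stands in for /etc/hosts).
cat > /tmp/hosts.blocklist <<'EOF'
0.0.0.0 youtube.com
0.0.0.0 www.youtube.com
0.0.0.0 m.youtube.com
EOF

# On a real machine: sudo tee -a /etc/hosts < /tmp/hosts.blocklist
grep -c '^0\.0\.0\.0' /tmp/hosts.blocklist
```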

<h3>Discord</h3>

<p>
Next I wanted to tackle Discord. Current alternative Discord clients focus on making the Discord experience better, but I weirdly wanted a worse one. So I looked at terminal clients briefly, but those are hard to use, don't display images, and don't really work well on mobile. By that point I'd already set up Matrix and had been planning to use it to bridge services like Slack and Facebook Messenger, whose UX I don't like anyway.
</p>

<p>
So I bridged Discord through Matrix and joined only a select few Discord servers on a completely new Discord account. As expected, the time I spent on Discord went down a lot; I stopped idly chatting with people and looking at memes. My overall quality of life went up, and later I expanded my new lifestyle even more by <a href="https://redalder.org/blog/./limiting-online-communication.html">limiting online communication</a> in general.
</p>

<h3>Moral of the Rambling</h3>

<p>
What I wanted to get across with this post is that making things inconvenient for oneself works great if one wishes to control how often they do said things. This applies to gaming, YouTube, social media, and many more things I personally cannot vouch for. It does of course require sacrifices and a rather large amount of time, depending on what technical solutions one decides to adopt.
</p>

<p>
I would also like to warn that it might lead to a slight disconnect from current events and from your friends. An overwhelming majority of people frequent Discord and other social media, especially younger folk, which means that if one disconnects from such platforms, they lose out on a significant portion of the socializing their peers do.
</p>

<p>
A good way to combat feelings of loneliness that may arise from radical changes in one's life is to realize how people lived in the past and to cherish real-world interactions much more. Each conversation with a friend is a gift and it is imperative that it is taken seriously and thoroughly enjoyed.
</p>
]]></description>
</item>
<item>
<title>Limiting Online Communication</title>
<link>https://redalder.org/blog/limiting-online-communication.html</link>
<author>magic_rb@redalder.org (magic_rb)</author>
<guid isPermaLink="false">https://redalder.org/blog/limiting-online-communication.html</guid>
<pubDate>Sun, 15 Oct 2023 00:00:00 +0000</pubDate>

<description><![CDATA[<p>
Today I'm starting a new experiment for my well-being. I have decided to cut down on almost all forms of instant messaging. Over the past few years I've noticed how attached to phones and other electronic devices my generation (I'm 21) is. I've also talked to multiple people older than me who've expressed concern about how my generation functions. Phones and instant messaging have become "the solutions" to problems such as: talking about hard topics, laziness to leave the house, the concept of a social battery, not being happy with yourself, stuff like that.
</p>

<h3>Reasoning</h3>

<p>
I can speak from my own experience that discussing hard topics is easier through text, and even easier in a non-native tongue (English). Talking about serious topics in person is almost never done, and if it is, it doesn't go well. When I was talking to my friends over Discord and we started talking about something serious, we'd instinctively switch to English.
</p>

<p>
Many people, me included, don't want to leave the house for a multitude of different reasons. It may be some random onset of social awkwardness, or the feeling that everyone outside has nothing better to do than watch you and judge you (I have that too sometimes and it's the silliest thing ever). This then leads to unsocial weekends, as generally during the week you're forced to leave the house, be it for university or work. These solitary weekends then have negative consequences, as one generally realizes that such a weekend is not good for them, which leads to feeling worse, which leads to leaving the house even less. This starts a vicious cycle, but we'll come back to that later.
</p>

<p>
There are also those who have grown accustomed to using "the social battery" as an excuse not to be social. They decide they have a set amount of socializing they're able to do per day or week. If they go over that arbitrary amount, they start to manifest the feelings of being socially exhausted and then proceed to end the session. Interestingly, chatting online seems to deplete "the battery" less, which makes sense. I'm not saying that this behavior is completely self-induced and made up, but I do think people who function like this reinforce it within themselves. The fact that chatting online seems not to be as draining leads to such people relying on online communication to not feel alone while not going over their quotas. This leads to what I would describe as a fake feeling of companionship.
</p>

<p>
This last point applies to me personally. I seem to not be able to be alone, not because I need people 24/7, but because I don't seem to be able to be alone with just myself. I think it's a case of distracting myself with others, so as not to have to deal with me and how I feel. When the distraction goes away, I have no choice but to focus on myself again, which then leads to feelings of sadness and such. I have not observed this phenomenon in other people, but frankly, this is a thing which manifests while alone, and understandably people are reluctant to talk about it, as talking about it breaks the illusion that the distraction of socializing provides.
</p>

<p>
It is also interesting that people who live in huge cities feel more alone than those who live in small towns and villages. One of the causes is, in my opinion, the fact that in such a huge city everyone lives somewhere completely different, so it's impractical to meet up randomly and on short notice. Meetings are arranged beforehand and a good while before they take place. While going to and from a social hangout, one is completely alone, even when surrounded by dozens of people on the metro, tram, bus, or even while just walking home. The density of people one actually interacts with goes down; the rest may as well just be NPCs.
</p>

<h3>Effect</h3>

<p>
Due to all described above, I've decided that I will only use online communication mediums like Discord, WhatsApp, and Matrix to arrange meetings and other real-life events. I want to hang out with real people in the real world, not with profile pictures on my phone. I've spent way too much time chatting with people, and even though I care about those people, if I'm to continue interacting with them, they'll have to be willing to meet up with me. I am more than willing to do that and I hope others will be willing to do the same. If not, then it is a shame, but I will not stare into my phone alone. I'm sure some will be unhappy about this new policy, and I'm sure I will lose a few friends, but my hope is that this change will lead to better friendships and better well-being for me and my remaining friends. I refuse to be a part of and contribute to the degradation of humanity due to the internet and its less-than-amazing aspects.
</p>
]]></description>
</item>
<item>
<title>Packaging Searx - Part 1</title>
<link>https://redalder.org/blog/packaging-searx/part1.html</link>
<author>magic_rb@redalder.org (magic_rb)</author>
<guid isPermaLink="false">https://redalder.org/blog/packaging-searx/part1.html</guid>
<pubDate>Sun, 24 Jul 2022 00:00:00 +0000</pubDate>

<description><![CDATA[<p>
In this N-part blog post series, I'll show you the exact process of packaging <a href="https://github.com/searx/searx">Searx</a>, a metasearch engine. Here's an excerpt from Searx's readme to shine a bit of light on what we'll be packaging.
</p>

<blockquote>
<p>
Searx is a free internet metasearch engine which aggregates results from more than 70 search services. Users are neither tracked nor profiled. Additionally, searx can be used over Tor for online anonymity.
</p>
</blockquote>

<p>
So if you're a privacy nerd or want to ensure Google doesn't know what you're cooking tonight, read on and you'll learn how Searx works from a system administrator's and packager's perspective.
</p>

<p>
Searx is already packaged in nixpkgs, but for the sake of this blog post, let's pretend it isn't. I'll go over all the things I check and verify, and all the things I do when packaging. So I'll quit mumbling and start Nix-ing!
</p>

<h3>Discovery</h3>

<p>
First, it's imperative that we find the upstream repo we'll be working with. It may sound simple enough, and in the case of Searx it luckily is, but it can also be challenging. It all depends on how well-known the project is and how unique the name is. My recommendation is to use a search engine and search for <code>searx git</code> in this case, which gets us <a href="https://github.com/searx/searx">https://github.com/searx/searx</a>.
</p>

<p>
Now that we have a link to the repo, we need to identify the language and, in the case of some languages, the build system. There are several ways to do this; one is to look at the root of the repo for a few recognizable files. I'll leave an incomplete table below.
</p>

<table border="2" cellspacing="0" cellpadding="6" rules="groups" frame="hsides">


<colgroup>
<col  class="org-left" />

<col  class="org-left" />
</colgroup>
<thead>
<tr>
<th scope="col" class="org-left">files / directories</th>
<th scope="col" class="org-left">language / build system</th>
</tr>
</thead>
<tbody>
<tr>
<td class="org-left">Cargo.toml, Cargo.lock</td>
<td class="org-left">Rust - cargo</td>
</tr>

<tr>
<td class="org-left">requirements.txt, setup.py</td>
<td class="org-left">Python 2/3</td>
</tr>

<tr>
<td class="org-left">CMakeLists.txt</td>
<td class="org-left">C, C++ - cmake</td>
</tr>

<tr>
<td class="org-left">meson.build</td>
<td class="org-left">C, C++ - meson</td>
</tr>

<tr>
<td class="org-left">composer.json, composer.lock</td>
<td class="org-left">PHP - composer</td>
</tr>

<tr>
<td class="org-left">package.json, package-lock.json</td>
<td class="org-left">Node - npm</td>
</tr>

<tr>
<td class="org-left">package.json, yarn.lock</td>
<td class="org-left">Node - yarn</td>
</tr>

<tr>
<td class="org-left">*.cabal, stack.yaml, package.yaml</td>
<td class="org-left">Haskell - stack/cabal</td>
</tr>
</tbody>
</table>
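As a toy version of the table above, the same check is easy to script. The function name and paths below are made up for illustration:

```shell
# Guess the language/build system of a checkout from its marker files.
guess_build_system() {
  dir=$1
  if [ -f "$dir/Cargo.toml" ]; then echo "Rust - cargo"
  elif [ -f "$dir/requirements.txt" ] || [ -f "$dir/setup.py" ]; then echo "Python 2/3"
  elif [ -f "$dir/CMakeLists.txt" ]; then echo "C, C++ - cmake"
  elif [ -f "$dir/meson.build" ]; then echo "C, C++ - meson"
  elif [ -f "$dir/composer.json" ]; then echo "PHP - composer"
  elif [ -f "$dir/package.json" ] && [ -f "$dir/yarn.lock" ]; then echo "Node - yarn"
  elif [ -f "$dir/package.json" ]; then echo "Node - npm"
  else echo "unknown"
  fi
}

# A fake checkout that looks like Searx's repository root:
mkdir -p /tmp/searx-checkout
touch /tmp/searx-checkout/requirements.txt /tmp/searx-checkout/setup.py
guess_build_system /tmp/searx-checkout   # prints: Python 2/3
```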

<p>
I won't list tools to use when packaging these different languages, because the recommended set changes often and I'd have to keep this blog post up to date :), but it's easy enough to search for them. Generally, searching for <code>&lt;package-manager&gt;2nix</code> will get you there.
</p>

<p>
Looking at the repository we see a <code>requirements.txt</code> and a <code>setup.py</code>. The first one is valuable because we <b>should</b> have a list of all Python packages we need, and the second we need to keep in mind, since it contains custom arbitrary Python that we may need to inspect and fix.
</p>

<div class="org-src-container">
<pre class="src src-fundamental">certifi==2022.5.18.1
babel==2.9.1
flask-babel==2.0.0
flask==2.1.1
jinja2==3.1.2
lxml==4.9.0
pygments==2.8.0
python-dateutil==2.8.2
pyyaml==6.0
httpx[http2]==0.23.0
Brotli==1.0.9
uvloop==0.16.0; python_version &gt;= '3.7'
uvloop==0.14.0; python_version &lt; '3.7'
httpx-socks[asyncio]==0.7.4
langdetect==1.0.9
setproctitle==1.2.2
</pre>
</div>

<p>
It's also worth looking at the <code>Dockerfile</code> and any <code>Makefile</code>, <code>Justfile</code>, or <code>scripts</code> folder. Here we have a <code>Dockerfile</code> and also a <code>Makefile</code>, lucky! Let's start with the <code>Dockerfile</code>, I'll pick out the important bits only.
</p>

<div class="org-src-container">
<pre class="src src-dockerfile"><span class="org-keyword">FROM</span> <span class="org-dockerfile-image-name">alpine:3.15</span>
</pre>
</div>

<p>
A pretty crucial piece of information here: we now know the distro the container uses, so we can discern the environment a bit, and that Searx will happily run on musl libc.
</p>

<div class="org-src-container">
<pre class="src src-dockerfile"><span class="org-keyword">ENTRYPOINT</span> [<span class="org-string">"/sbin/tini"</span>,<span class="org-string">"--"</span>,<span class="org-string">"/usr/local/searx/dockerfiles/docker-entrypoint.sh"</span>]
</pre>
</div>

<p>
Here we see where we should look for the startup script.
</p>

<div class="org-src-container">
<pre class="src src-dockerfile"><span class="org-keyword">ENV</span> <span class="org-variable-name">INSTANCE_NAME</span>=searx <span class="org-sh-escaped-newline">\</span>
    <span class="org-variable-name">AUTOCOMPLETE</span>= <span class="org-sh-escaped-newline">\</span>
    <span class="org-variable-name">BASE_URL</span>= <span class="org-sh-escaped-newline">\</span>
    <span class="org-variable-name">MORTY_KEY</span>= <span class="org-sh-escaped-newline">\</span>
    <span class="org-variable-name">MORTY_URL</span>= <span class="org-sh-escaped-newline">\</span>
    <span class="org-variable-name">SEARX_SETTINGS_PATH</span>=/etc/searx/settings.yml <span class="org-sh-escaped-newline">\</span>
    <span class="org-variable-name">UWSGI_SETTINGS_PATH</span>=/etc/searx/uwsgi.ini
</pre>
</div>

<p>
Here we have an <b>incomplete</b> list of arguments we can pass into the Docker container; it's important to later notice where they're handled: in the scripting or in the actual program itself?
</p>

<div class="org-src-container">
<pre class="src src-shell">apk add --no-cache -t build-dependencies <span class="org-sh-escaped-newline">\</span>
  build-base <span class="org-sh-escaped-newline">\</span>
  py3-setuptools <span class="org-sh-escaped-newline">\</span>
  python3-dev <span class="org-sh-escaped-newline">\</span>
  libffi-dev <span class="org-sh-escaped-newline">\</span>
  libxslt-dev <span class="org-sh-escaped-newline">\</span>
  libxml2-dev <span class="org-sh-escaped-newline">\</span>
  openssl-dev <span class="org-sh-escaped-newline">\</span>
  tar <span class="org-sh-escaped-newline">\</span>
  git <span class="org-sh-escaped-newline">\</span>
</pre>
</div>

<p>
Here we see a list of packages installed with apk, but you (and me actually) may not know what <code>-t build-dependencies</code> does. It's best to look at the man page for <code>apk add</code>, so search for <code>apk-add man</code>. According to <a href="https://www.mankier.com/8/apk-add">https://www.mankier.com/8/apk-add</a>, <code>-t</code> adds a virtual package with the dependencies listed on the command line and then installs that package. So we have one package, <code>build-dependencies</code>, containing a set of packages we need at build time.
</p>

<div class="org-src-container">
<pre class="src src-shell">apk add --no-cache <span class="org-sh-escaped-newline">\</span>
  ca-certificates <span class="org-sh-escaped-newline">\</span>
  su-exec <span class="org-sh-escaped-newline">\</span>
  python3 <span class="org-sh-escaped-newline">\</span>
  py3-pip <span class="org-sh-escaped-newline">\</span>
  libxml2 <span class="org-sh-escaped-newline">\</span>
  libxslt <span class="org-sh-escaped-newline">\</span>
  openssl <span class="org-sh-escaped-newline">\</span>
  tini <span class="org-sh-escaped-newline">\</span>
  uwsgi <span class="org-sh-escaped-newline">\</span>
  uwsgi-python3 <span class="org-sh-escaped-newline">\</span>
  brotli <span class="org-sh-escaped-newline">\</span>
</pre>
</div>

<p>
Next we have a list of packages needed at runtime; this one is really important to remember, since we may have to add these in a special way later. You'll see what I mean.
</p>

<div class="org-src-container">
<pre class="src src-shell">pip3 install --upgrade pip wheel setuptools <span class="org-sh-escaped-newline">\</span>
</pre>
</div>

<p>
Then it upgrades <code>pip</code>, <code>wheel</code>, and <code>setuptools</code>. I personally had to look up what <code>wheel</code> is. But looking at <a href="https://pkgs.alpinelinux.org/packages?name=*wheel*&amp;branch=edge">Alpine Linux packages</a> yields no results, so let's just ignore it for now. If it doesn't come up later it's not important.
</p>

<div class="org-src-container">
<pre class="src src-shell">pip3 install --no-cache -r requirements.txt <span class="org-sh-escaped-newline">\</span>
</pre>
</div>

<p>
Second to last, it installs the packages specified in <code>requirements.txt</code>, as expected.
</p>

<div class="org-src-container">
<pre class="src src-shell">apk del build-dependencies <span class="org-sh-escaped-newline">\</span>
&amp;&amp; rm -rf /root/.cache
</pre>
</div>

<p>
And lastly it does some cleanup. Which is interesting, because I expected those dependencies to be used later by some custom searx native component, but I guess it makes sense they're not.
</p>

<div class="org-src-container">
<pre class="src src-dockerfile"><span class="org-keyword">COPY</span> searx ./searx
<span class="org-keyword">COPY</span> dockerfiles ./dockerfiles
</pre>
</div>

<p>
We now see where that startup script comes from.
</p>

<div class="org-src-container">
<pre class="src src-dockerfile"><span class="org-keyword">RUN</span> /usr/bin/python3 -m compileall -q searx; <span class="org-sh-escaped-newline">\</span>
    touch -c --<span class="org-variable-name">date</span>=@${<span class="org-variable-name">TIMESTAMP_SETTINGS</span>} searx/settings.yml; <span class="org-sh-escaped-newline">\</span>
    touch -c --<span class="org-variable-name">date</span>=@${<span class="org-variable-name">TIMESTAMP_UWSGI</span>} dockerfiles/uwsgi.ini; <span class="org-sh-escaped-newline">\</span>
    <span class="org-keyword">if</span> [ <span class="org-negation-char">!</span> -z $<span class="org-variable-name">VERSION_GITCOMMIT</span> ]; <span class="org-keyword">then</span><span class="org-sh-escaped-newline">\</span>
      <span class="org-builtin">echo</span> <span class="org-string">"VERSION_STRING = VERSION_STRING + \"-$VERSION_GITCOMMIT\""</span> &gt;&gt; /usr/local/searx/searx/version.py; <span class="org-sh-escaped-newline">\</span>
    <span class="org-keyword">fi</span>; <span class="org-sh-escaped-newline">\</span>
    find /usr/local/searx/searx/static -a <span class="org-string">\(</span> -name <span class="org-string">'*.html'</span> -o -name <span class="org-string">'*.css'</span> -o -name <span class="org-string">'*.js'</span> <span class="org-sh-escaped-newline">\</span>
    -o -name <span class="org-string">'*.svg'</span> -o -name <span class="org-string">'*.ttf'</span> -o -name <span class="org-string">'*.eot'</span> <span class="org-string">\)</span> <span class="org-sh-escaped-newline">\</span>
    -type f -exec gzip -9 -k {} <span class="org-string">\+</span> -exec brotli --best {} <span class="org-string">\+</span>
</pre>
</div>

<p>
This is a complicated little beast. We see <code>searx/settings.yml</code>, <code>dockerfiles/uwsgi.ini</code> and <code>/usr/local/searx/searx/version.py</code>, and we also see that it byte-compiles all the Python files, though nixpkgs will take care of that for us. Interestingly, it also compresses all the static assets. The <code>find</code> command looks for all files ending in <code>.html</code>, <code>.css</code>, <code>.js</code>, <code>.svg</code>, <code>.ttf</code> and <code>.eot</code>, then executes <code>gzip -9 -k</code> and <code>brotli --best</code> on them (here I again had to search for what brotli is; it turns out to be a <a href="https://github.com/google/brotli">compression scheme</a> from Google).
</p>
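<p>
To make the precompression step concrete, here is a minimal sketch of the same find-plus-gzip pattern on a throwaway directory (the paths and file contents are made up for illustration, and brotli is left out since it may not be installed):
</p>

```shell
# Sketch of the Dockerfile's asset precompression, on a scratch directory.
tmp=$(mktemp -d)
printf 'body { color: red }\n' > "$tmp/style.css"
printf 'hello\n' > "$tmp/readme.txt"

# Like the Dockerfile: compress only whitelisted extensions,
# keeping the originals (-k) so the server can serve either variant.
find "$tmp" -type f \( -name '*.css' -o -name '*.js' \) \
  -exec gzip -9 -k {} +

ls "$tmp"   # readme.txt, style.css, style.css.gz
```

<p>
Only <code>style.css</code> gets a <code>.gz</code> sibling; files outside the whitelist are left alone.
</p>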

<p>
That's all from the Dockerfile. Now we need to look at the script it calls.
</p>
<div id="outline-container-file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-packaging-searx-part1-org-docker-entrypoint-sh-script" class="outline-3">
<h3 id="file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-packaging-searx-part1-org-docker-entrypoint-sh-script"><span class="section-number-3">5.1.</span> <code>docker-entrypoint.sh</code> script</h3>
<div class="outline-text-3" id="text-5-1">
<div class="org-src-container">
<pre class="src src-shell"><span class="org-builtin">printf</span> <span class="org-string">"\nEnvironment variables:\n\n"</span>
<span class="org-builtin">printf</span> <span class="org-string">"  INSTANCE_NAME settings.yml : general.instance_name\n"</span>
<span class="org-builtin">printf</span> <span class="org-string">"  AUTOCOMPLETE  settings.yml : search.autocomplete\n"</span>
<span class="org-builtin">printf</span> <span class="org-string">"  BASE_URL      settings.yml : server.base_url\n"</span>
<span class="org-builtin">printf</span> <span class="org-string">"  MORTY_URL     settings.yml : result_proxy.url\n"</span>
<span class="org-builtin">printf</span> <span class="org-string">"  MORTY_KEY     settings.yml : result_proxy.key\n"</span>
<span class="org-builtin">printf</span> <span class="org-string">"  BIND_ADDRESS  uwsgi bind to the specified TCP socket using HTTP protocol. Default value: \"${DEFAULT_BIND_ADDRESS}\"\n"</span>
</pre>
</div>

<p>
That's a nice little rundown of the supported configuration options, and it also tells us that Searx is configured via <code>settings.yml</code>; this knowledge will come in handy when we're writing the NixOS module for Searx.
</p>

<div class="org-src-container">
<pre class="src src-shell"><span class="org-comment-delimiter"># </span><span class="org-comment">update settings.yml
</span>sed -i -e <span class="org-string">"s|base_url : False|base_url : ${BASE_URL}|g"</span> <span class="org-sh-escaped-newline">\</span>
   -e <span class="org-string">"s/instance_name : \"searx\"/instance_name : \"${INSTANCE_NAME}\"/g"</span> <span class="org-sh-escaped-newline">\</span>
   -e <span class="org-string">"s/autocomplete : \"\"/autocomplete : \"${AUTOCOMPLETE}\"/g"</span> <span class="org-sh-escaped-newline">\</span>
   -e <span class="org-string">"s/ultrasecretkey/$(</span><span class="org-sh-quoted-exec">openssl rand -hex 32</span><span class="org-string">)/g"</span> <span class="org-sh-escaped-newline">\</span>
   <span class="org-string">"${CONF}"</span>
</pre>
</div>

<p>
This command confirms that we are in fact dealing with a <code>settings.yml</code>.
</p>

<div class="org-src-container">
<pre class="src src-shell">sed -i -e <span class="org-string">"s/image_proxy : False/image_proxy : True/g"</span> <span class="org-sh-escaped-newline">\</span>
            <span class="org-string">"${CONF}"</span>
cat &gt;&gt; <span class="org-string">"${CONF}"</span> &lt;&lt;-EOF<span class="org-sh-heredoc">

# Morty configuration
result_proxy:
   url : ${MORTY_URL}
   key : !!binary "${MORTY_KEY}"
EOF</span>
</pre>
</div>

<p>
This bit is interesting. I initially thought that the script updates the existing config with new values, but the code block above would mean that on every restart a new <code>result_proxy</code> block gets appended. So it must instead take a default config, write your settings into it, and replace the current one with the result.
</p>
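<p>
A quick way to convince yourself of this: the append is not idempotent. A tiny sketch (the file contents and URL are made up for illustration):
</p>

```shell
CONF=$(mktemp)
printf 'image_proxy : False\n' > "$CONF"

patch_conf() {
  sed -i -e 's/image_proxy : False/image_proxy : True/g' "$CONF"
  # the same unconditional append the entrypoint does
  printf '\nresult_proxy:\n   url : http://example.invalid\n' >> "$CONF"
}

patch_conf   # first "restart"
patch_conf   # second "restart"
grep -c 'result_proxy:' "$CONF"   # prints 2: the block got appended twice
```

<p>
Run against the same file twice, you end up with two <code>result_proxy</code> blocks, so the script can't be patching a persistent config in place.
</p>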

<p>
It's common to realize things like this along the way; it's unusual to get all assumptions right initially. As you go further into the package, you'll naturally stumble upon issues caused by your assumptions. Just make sure you keep track of what you know and what you merely assume.
</p>

<div class="org-src-container">
<pre class="src src-bash"><span class="org-keyword">if</span> [ -f <span class="org-string">"${CONF}"</span> ]; <span class="org-keyword">then</span>
    <span class="org-keyword">if</span> [ <span class="org-string">"${REF_CONF}"</span> -nt <span class="org-string">"${CONF}"</span> ]; <span class="org-keyword">then</span>
        <span class="org-comment-delimiter"># </span><span class="org-comment">There is a new version
</span>        <span class="org-keyword">if</span> [ $<span class="org-variable-name">FORCE_CONF_UPDATE</span> -ne 0 ]; <span class="org-keyword">then</span>
            <span class="org-comment-delimiter"># </span><span class="org-comment">Replace the current configuration
</span>            <span class="org-builtin">printf</span> <span class="org-string">'&#9888;&#65039;  Automaticaly update %s to the new version\n'</span> <span class="org-string">"${CONF}"</span>
            <span class="org-keyword">if</span> [ <span class="org-negation-char">!</span> -f <span class="org-string">"${OLD_CONF}"</span> ]; <span class="org-keyword">then</span>
                <span class="org-builtin">printf</span> <span class="org-string">'The previous configuration is saved to %s\n'</span> <span class="org-string">"${OLD_CONF}"</span>
                mv <span class="org-string">"${CONF}"</span> <span class="org-string">"${OLD_CONF}"</span>
            <span class="org-keyword">fi</span>
            cp <span class="org-string">"${REF_CONF}"</span> <span class="org-string">"${CONF}"</span>
            $<span class="org-variable-name">PATCH_REF_CONF</span> <span class="org-string">"${CONF}"</span>
        <span class="org-keyword">else</span>
            <span class="org-comment-delimiter"># </span><span class="org-comment">Keep the current configuration
</span>            <span class="org-builtin">printf</span> <span class="org-string">'&#9888;&#65039;  Check new version %s to make sure searx is working properly\n'</span> <span class="org-string">"${NEW_CONF}"</span>
            cp <span class="org-string">"${REF_CONF}"</span> <span class="org-string">"${NEW_CONF}"</span>
            $<span class="org-variable-name">PATCH_REF_CONF</span> <span class="org-string">"${NEW_CONF}"</span>
        <span class="org-keyword">fi</span>
    <span class="org-keyword">else</span>
        <span class="org-builtin">printf</span> <span class="org-string">'Use existing %s\n'</span> <span class="org-string">"${CONF}"</span>
    <span class="org-keyword">fi</span>
<span class="org-keyword">else</span>
    <span class="org-builtin">printf</span> <span class="org-string">'Create %s\n'</span> <span class="org-string">"${CONF}"</span>
    cp <span class="org-string">"${REF_CONF}"</span> <span class="org-string">"${CONF}"</span>
    $<span class="org-variable-name">PATCH_REF_CONF</span> <span class="org-string">"${CONF}"</span>
<span class="org-keyword">fi</span>
</pre>
</div>

<p>
When you encounter such an ugly piece of code, you don't need to understand it fully; the general gist of it is more than enough. At a glance we see that configuration starts from a reference config, which is patched to produce the final config.
</p>
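<p>
That gist boils down to a much smaller sketch. This is my simplified re-implementation for illustration, not the real function (the patching callback is omitted):
</p>

```shell
# conf missing        -> copy the reference in
# ref newer, force=1  -> back up conf, replace it with ref
# ref newer, force=0  -> leave conf alone, drop ref next to it as conf.new
# ref older           -> keep conf as-is
update_conf() {
  force=$1 conf=$2 ref=$3
  if [ ! -f "$conf" ]; then
    cp "$ref" "$conf"
  elif [ "$ref" -nt "$conf" ]; then
    if [ "$force" -ne 0 ]; then
      cp "$conf" "$conf.old"
      cp "$ref" "$conf"
    else
      cp "$ref" "$conf.new"
    fi
  fi
}
```

<p>
The whole decision tree is just "is there a config, is the reference newer, and am I allowed to overwrite".
</p>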

<div class="org-src-container">
<pre class="src src-shell"><span class="org-comment-delimiter"># </span><span class="org-comment">make sure there are uwsgi settings
</span>update_conf ${<span class="org-variable-name">FORCE_CONF_UPDATE</span>} <span class="org-string">"${UWSGI_SETTINGS_PATH}"</span> <span class="org-string">"/usr/local/searx/dockerfiles/uwsgi.ini"</span> <span class="org-string">"patch_uwsgi_settings"</span>

<span class="org-comment-delimiter"># </span><span class="org-comment">make sure there are searx settings
</span>update_conf <span class="org-string">"${FORCE_CONF_UPDATE}"</span> <span class="org-string">"${SEARX_SETTINGS_PATH}"</span> <span class="org-string">"/usr/local/searx/searx/settings.yml"</span> <span class="org-string">"patch_searx_settings"</span>
</pre>
</div>

<p>
Looking at the call sites, we see both the reference config file paths and the functions used for patching.
</p>

<div class="org-src-container">
<pre class="src src-shell"><span class="org-function-name">patch_uwsgi_settings</span>() {
    <span class="org-variable-name">CONF</span>=<span class="org-string">"$1"</span>

    <span class="org-comment-delimiter"># </span><span class="org-comment">Nothing
</span>}
</pre>
</div>

<p>
Interestingly, the <code>uwsgi</code> config doesn't get patched at all, so the reference one should be fine in most cases.
</p>

<div class="org-src-container">
<pre class="src src-shell"><span class="org-keyword">exec</span> su-exec searx:searx uwsgi --master --http-socket <span class="org-string">"${BIND_ADDRESS}"</span> <span class="org-string">"${UWSGI_SETTINGS_PATH}"</span>
</pre>
</div>

<p>
And finally we see the command used to actually launch Searx.
</p>
</div>
</div>
<div id="outline-container-file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-packaging-searx-part1-org-what-is-uwsgi" class="outline-3">
<h3 id="file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-packaging-searx-part1-org-what-is-uwsgi"><span class="section-number-3">5.2.</span> What is <code>uwsgi</code></h3>
<div class="outline-text-3" id="text-5-2">
<p>
I once again had to look this up. According to Wikipedia it's similar to CGI, if you're familiar with that. If not, well, it's used to let webservers like Nginx serve applications written in arbitrary languages. So: <code>client -&gt; Nginx -&gt; uwsgi -&gt; Python backend</code>.
</p>
</div>
<div id="outline-container-file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-packaging-searx-part1-org-what-is-uwsgi-aren-t-we-missing-a-full-webserver" class="outline-4">
<h4 id="file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-packaging-searx-part1-org-what-is-uwsgi-aren-t-we-missing-a-full-webserver"><span class="section-number-4">5.2.1.</span> Aren't we missing a full webserver?</h4>
<div class="outline-text-4" id="text-5-2-1">
<blockquote>
<p>
uWSGI natively speaks HTTP, FastCGI, SCGI and its specific protocol named “uwsgi”
</p>
</blockquote>

<p>
No, uWSGI can itself act as a lightweight webserver. So ideally the NixOS module would support all the methods, HTTP, FastCGI, SCGI and uwsgi, but that's something to worry about later.
</p>
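<p>
For reference, if we later do put Nginx in front and use the native uwsgi protocol, the proxy side would look roughly like this (a hypothetical snippet; the port is made up, and uWSGI would need to listen with <code>--socket</code> rather than <code>--http-socket</code>):
</p>

```nginx
location / {
    include uwsgi_params;          # standard nginx helper setting the uwsgi vars
    uwsgi_pass 127.0.0.1:8080;     # where our uwsgi instance listens
}
```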

<p>
<b>Packaging</b>
</p>

<p>
Now that we know all there is to know from the Docker image and related files, we can start writing Nix expressions. First, let's quickly create a new repository. We'll start with a Flake; it's easier, and if done right it can later be ported to nixpkgs with little effort.
</p>

<div class="org-src-container">
<pre class="src src-shell">git init searx-nix
</pre>
</div>

<div class="org-src-container">
<pre class="src src-nix">{
  <span class="org-nix-attribute">inputs.nixpkgs.url</span> = <span class="org-string">"github:NixOS/nixpkgs"</span>;

  <span class="org-nix-attribute">outputs</span> =
    {
      self,
      nixpkgs
    }:
    <span class="org-nix-keyword">let</span>
      <span class="org-nix-attribute">supportedSystems</span> = [ <span class="org-string">"x86_64-linux"</span> ];
      <span class="org-nix-attribute">forAllSystems'</span> = nixpkgs.lib.genAttrs;
      <span class="org-nix-attribute">forAllSystems</span> = forAllSystems' supportedSystems;

      <span class="org-nix-attribute">pkgsForSystem</span> =
        system:
        <span class="org-nix-builtin">import</span> nixpkgs { <span class="org-nix-keyword">inherit</span> system; };
    <span class="org-nix-keyword">in</span>
      {
        <span class="org-nix-attribute">packages</span> = forAllSystems
          (system:
            <span class="org-nix-keyword">let</span>
              <span class="org-nix-attribute">pkgs</span> = pkgsForSystem system;
            <span class="org-nix-keyword">in</span>
              {
                <span class="org-nix-attribute">default</span> = pkgs.callPackage <span class="org-nix-constant">./searx.nix</span> {};
              }
          );
      };
}
</pre>
</div>

<p>
We then create a tiny <code>flake.nix</code>. The cruft around it is generic and not really important; the important bit is <code class="src src-nix">pkgs.callPackage <span class="org-nix-constant">./searx.nix</span> {}</code>, which ensures that our actual package doesn't care whether it lives in a flake or not.
</p>

<p>
Looking up <code>nixpkgs python</code> gets us to the nixpkgs manual (the information is in both the official one and ryantm's, but the latter is better since it isn't one huge HTML page): <a href="https://ryantm.github.io/nixpkgs/languages-frameworks/python/">ryantm's nixpkgs manual</a>.
</p>

<div class="org-src-container">
<pre class="src src-nix">{ lib, python3 }:

python3.pkgs.buildPythonApplication <span class="org-nix-keyword">rec</span> {
  <span class="org-nix-attribute">pname</span> = <span class="org-string">"luigi"</span>;
  <span class="org-nix-attribute">version</span> = <span class="org-string">"2.7.9"</span>;

  <span class="org-nix-attribute">src</span> = python3.pkgs.fetchPypi {
    <span class="org-nix-keyword">inherit</span> pname version;
    <span class="org-nix-attribute">sha256</span> = <span class="org-string">"035w8gqql36zlan0xjrzz9j4lh9hs0qrsgnbyw07qs7lnkvbdv9x"</span>;
  };

  <span class="org-nix-attribute">propagatedBuildInputs</span> = <span class="org-nix-keyword">with</span> python3.pkgs; [ tornado python-daemon ];

  <span class="org-nix-attribute">meta</span> = <span class="org-nix-keyword">with</span> lib; {
    ...
  };
}
</pre>
</div>

<p>
As an example we're given a derivation for luigi. I don't know what luigi is, and I don't need to: to keep packaging fast, it's important to ignore irrelevant information rather than research it.
</p>

<p>
Based on the example derivation we can build our own. Instead of <code>python3.pkgs.fetchPypi</code> we're going to use <code>fetchFromGitHub</code> as that's more universal and easier to work with.
</p>

<div class="org-src-container">
<pre class="src src-nix">{
  lib,
  python3,
  fetchFromGitHub
}:
<span class="org-nix-keyword">with</span> lib;
<span class="org-nix-keyword">let</span>
  <span class="org-nix-attribute">pname</span> = <span class="org-string">"searx"</span>;
  <span class="org-nix-attribute">version</span> = <span class="org-string">"1.0.0"</span>;
<span class="org-nix-keyword">in</span>
python3.pkgs.buildPythonApplication {
  <span class="org-nix-keyword">inherit</span> pname version;

  <span class="org-nix-attribute">src</span> = fetchFromGitHub {
    <span class="org-nix-attribute">rev</span> = version;
    <span class="org-nix-attribute">repo</span> = pname;
    <span class="org-nix-attribute">owner</span> = pname;
    <span class="org-comment"># If you update the version, you need to switch back to ~lib.fakeSha256~ and copy the new hash
</span>    <span class="org-nix-attribute">sha256</span> = <span class="org-string">"sha256-sIJ+QXwUdsRIpg6ffUS3ItQvrFy0kmtI8whaiR7qEz4="</span>; <span class="org-comment"># lib.fakeSha256;
</span>  };

  <span class="org-nix-attribute">postPatch</span> = <span class="org-string">''
    sed -i 's/==.*$//' requirements.txt
  ''</span>;

  <span class="org-comment"># tests try to connect to network
</span>  <span class="org-nix-attribute">doCheck</span> = <span class="org-nix-builtin">false</span>;

  <span class="org-nix-attribute">pythonImportsCheck</span> = [ <span class="org-string">"searx"</span> ];

  <span class="org-comment"># Since Python is weird, we need to put any dependencies we know of here
</span>  <span class="org-comment"># and not into ~buildInputs~ or ~nativeBuildInputs~ as one might expect.
</span>  <span class="org-comment"># As a starting point, just copy everything from ~requirements.txt~ and
</span>  <span class="org-comment"># hope for the best.
</span>  <span class="org-nix-attribute">propagatedBuildInputs</span> = <span class="org-nix-keyword">with</span> python3.pkgs;
    [
      certifi
      babel
      flask-babel
      flask
      jinja2
      lxml
      pygments
      python-dateutil
      pyyaml
      <span class="org-comment"># httpx[http2]
</span>      httpx
      brotli
      <span class="org-comment"># uvloop==0.16.0; python_version &gt;= '3.7'
</span>      <span class="org-comment"># uvloop==0.14.0; python_version &lt; '3.7'
</span>      uvloop
      <span class="org-comment"># httpx-socks[asyncio]
</span>      httpx-socks
      langdetect
      setproctitle

      <span class="org-comment"># sometimes the packages in ~requirements.txt~ may not be enough, so if something is missing, just add it
</span>      requests
    ];

  <span class="org-nix-attribute">meta</span> = <span class="org-nix-keyword">with</span> lib; {
    <span class="org-comment"># You'll fill this in later when upstreaming to nixpkgs
</span>  };
}
</pre>
</div>

<p>
At this point I looked at the already existing derivation, because I was curious.
</p>

<div class="org-src-container">
<pre class="src src-nix"><span class="org-comment-delimiter"># </span><span class="org-comment">tests try to connect to network
</span><span class="org-nix-attribute">doCheck</span> = <span class="org-nix-builtin">false</span>;

<span class="org-nix-attribute">pythonImportsCheck</span> = [ <span class="org-string">"searx"</span> ];

<span class="org-nix-attribute">postPatch</span> = <span class="org-string">''
  sed -i 's/==.*$//' requirements.txt
''</span>;
</pre>
</div>

<p>
The <code class="src src-nix"><span class="org-nix-attribute">doCheck</span> = <span class="org-nix-builtin">false</span></code> came from experimentation. I didn't know what <code class="src src-nix"><span class="org-nix-attribute">pythonImportsCheck</span> = [ <span class="org-string">"searx"</span> ]</code> does, so I looked around: I first went to <a href="https://github.com/NixOS/nixpkgs">nixpkgs</a>, clicked <code>Go to file</code>, searched for <code>python</code>, and opened <code>pkgs/top-level/python-packages.nix</code>. Inspecting the file, on line 41 I found the definition of <code>buildPythonApplication</code>.
</p>

<div class="org-src-container">
<pre class="src src-nix"><span class="org-nix-attribute">buildPythonPackage</span> = makeOverridablePythonPackage (lib.makeOverridable (callPackage <span class="org-nix-constant">../development/interpreters/python/mk-python-derivation.nix</span> {
  <span class="org-nix-keyword">inherit</span> namePrefix;     <span class="org-comment"># We want Python libraries to be named like e.g. "python3.6-${name}"
</span>  <span class="org-nix-keyword">inherit</span> toPythonModule; <span class="org-comment"># Libraries provide modules
</span>}));
</pre>
</div>

<p>
This points to a file called <code>mk-python-derivation.nix</code>, so again, <code>Go to file</code>. <a href="https://github.com/NixOS/nixpkgs/blob/nixos-22.05/pkgs/development/interpreters/python/mk-python-derivation.nix">mk-python-derivation.nix</a> tells us a lot, but still not what <code>pythonImportsCheck</code> does; it's only mentioned as <code>pythonImportsCheckHook</code>, which prompted me to look for said hook. Going into the containing directory and opening <code>hooks/python-imports-check-hook.sh</code>, we can satiate our curiosity.
</p>
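<p>
In essence, the hook just tries to import each listed module with the build's Python, and fails the build if any import fails. A rough sketch of the idea, not the real hook, using stdlib modules as stand-ins for <code>searx</code>:
</p>

```shell
# Rough sketch of what pythonImportsCheck boils down to (not the nixpkgs hook).
check_imports() {
  for mod in "$@"; do
    python3 -c "import $mod" || { echo "import of $mod failed"; return 1; }
  done
}

check_imports json pathlib   # stand-ins for "searx"; succeeds if both import
```

<p>
A cheap smoke test: if the package's modules can't even be imported, the derivation is broken regardless of what the test suite would say.
</p>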

<p>
Lastly, <code class="src src-nix"><span class="org-nix-attribute">postPatch</span> = <span class="org-string">''...''</span></code> is used to patch out the version constraints in <code>requirements.txt</code>; left in, the exact pins cause an error at build time, presumably because nixpkgs carries slightly different versions.
</p>
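<p>
You can see what that <code>sed</code> does on a made-up <code>requirements.txt</code> (the package names and versions here are just examples):
</p>

```shell
# The same substitution postPatch runs, applied to example pins.
reqs=$(mktemp)
printf 'certifi==2021.10.8\nlanguage-detection==1.1.0\n' > "$reqs"

sed -i 's/==.*$//' "$reqs"   # drop everything from "==" to end of line

cat "$reqs"   # certifi
              # language-detection
```

<p>
Only the bare package names remain, which is exactly what we want when nixpkgs supplies the versions.
</p>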

<p>
With all these things in place, we get a successful build.
</p>

<p>
In the next blog post we'll start with the NixOS module by first trying to actually get a full launch of Searx. Till then!
</p>
</div>
</div>
</div>
]]></description>
</item>
<item>
<title>On Freedom, Crypto and Return Policies</title>
<link>https://redalder.org/blog/on-freedom-crypto-and-return-policies.html</link>
<author>magic_rb@redalder.org (magic_rb)</author>
<guid isPermaLink="false">https://redalder.org/blog/on-freedom-crypto-and-return-policies.html</guid>
<pubDate>Fri, 07 Jan 2022 00:00:00 +0000</pubDate>

<description><![CDATA[<p>
So after Christmas I decided to buy myself something nice. I thought about buying a new home server. But because ideally I'd be moving to the Netherlands for university soon, I decided that it would be best to get something small and portable instead. After much pondering I settled on buying a crypto wallet. I could use it eventually to get into trading but also to replace my current janky GPG setup. Most hardware wallets support OpenPGP in some shape or form. My choices, as far as I could tell, were: Trezor T, Ledger Nano S/X and that's about it.
</p>

<p>
<b>Ledger Nano S/X</b>
</p>

<p>
Both are great devices, except that the Nano S has this tiny flaw of not having enough memory to install multiple apps at the same time. It can be worked around by constantly installing and removing apps (provided you don't use your crypto often), but that has a few disadvantages:
</p>

<ul>
<li><b>cumbersome</b> - do you really want to shuffle apps around constantly?</li>
<li><b>wears out flash</b> - unnecessarily abusing the flash is a bad idea, especially since it's even less durable than normal flash</li>
<li><b>some apps might not even fit</b> - I can't verify this one, but I've read somewhere that some bigger apps might not fit at all. And I don't mean installing Bitcoin and Ada side by side; just Bitcoin by itself might not fit. That's bad</li>
</ul>

<p>
So after not-so-careful consideration I removed the Nano S from my list. Easy, one down, now's the final round.
</p>

<p>
<b>Trezor T</b>
</p>

<p>
Very good device: open source hardware and software, well supported. One thing I don't like is the form factor; I prefer Ledger's USB-stick-like design, it stands out less.
</p>
<div id="outline-container-file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-bluetooth" class="outline-3">
<h3 id="file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-bluetooth"><span class="section-number-3">6.1.</span> Bluetooth</h3>
<div class="outline-text-3" id="text-6-1">
<p>
I really liked the idea of Bluetooth on the Ledger Nano X. I know some might be scared of the security of Bluetooth, but what makes hardware wallets like the Ledger secure is that they consider everything outside their cases to be a threat. If you wanna make a transaction, you gotta first check the value and receiving account on the device itself, then confirm the transaction on the device. Therefore I consider Bluetooth security a moot point, as even if everything were plaintext, an attacker could at most get a read-only view of what I see on my computer.
</p>

<p>
This is one of the reasons that made me lean towards the Ledger Nano X.
</p>
</div>
</div>
<div id="outline-container-file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-screen" class="outline-3">
<h3 id="file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-screen"><span class="section-number-3">6.2.</span> Screen</h3>
<div class="outline-text-3" id="text-6-2">
<p>
I like the Ledger's screen a lot. It's small, black and white and very readable. Just perfect. On the other hand the Trezor T has this big colorful thing that's frankly overkill. My laptop has a colorful screen, my wallet doesn't need it.
</p>
</div>
</div>
<div id="outline-container-file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-price" class="outline-3">
<h3 id="file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-price"><span class="section-number-3">6.3.</span> Price</h3>
<div class="outline-text-3" id="text-6-3">
<p>
The Trezor T is around 200€ and the Ledger Nano X was 120€ when I bought it. It's gone up a bit since, not that it changes whether you should buy one. 80€ might not seem like much for fully open source hardware and software, but consider that I bought this primarily for GPG and not crypto, at least for now.
</p>
</div>
</div>
<div id="outline-container-file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-conclusion" class="outline-3">
<h3 id="file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-conclusion"><span class="section-number-3">6.4.</span> Conclusion</h3>
<div class="outline-text-3" id="text-6-4">
<p>
All in all, the Trezor T is a damn good device, but unfortunately it also had to go from my list.
</p>

<p>
That leaves the Nano X as the winner. Yay! for the Nano X.
</p>

<p>
<b>Ledger Nano X</b>
</p>

<p>
After opening the box I was mighty impressed. It looked good and had no manufacturing defects. I checked out the UI quickly; it felt snappy, navigable and sleek. Very good first impressions. I went ahead and installed the Ledger Live Desktop program, and even though it's React Native it felt very snappy and light; once again I was impressed. I went to install Bitcoin, Ada and OpenPGP on my Ledger. OwO, what's this? There are 3 OpenPGP apps? Luckily I had done my research and knew, at least partially, what each did and how they were different. Of the 3 apps, two are basically the same. I'll split them into <code>smartcard emulators</code> and <code>weird things</code>.
</p>
</div>
</div>
<div id="outline-container-file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-smartcard-emulators" class="outline-3">
<h3 id="file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-smartcard-emulators"><span class="section-number-3">6.5.</span> SmartCard Emulators</h3>
<div class="outline-text-3" id="text-6-5">
<p>
A smart card is a fascinating thing, really. Back in the days of old, you'd use a smart card, a literal card-shaped thingie, as the storage for your keys or what have you. They were not restricted to OpenPGP, though, and so-called OpenPGP smart cards are but one kind of card; hell, even NFC tags can be interfaced with like smart cards.
</p>

<p>
This is exactly what the first 2 apps emulate on the Ledger Nano X. The device reports itself as a card reader with 1 or 3 card slots (that's why there are 2 apps: the first has only one slot, while the "XL" version has 3). You can then write to those cards using OpenPGP, 4 keys per card: Authentication, Sign, Encrypt, plus a symmetric key. I've no idea what the last one is for, but you don't have to use it if you don't want to. So in total it can store and perform cryptography with 12 keys. This is exactly the same thing a Yubikey does; it too just emulates a smart card.
</p>

<p>
Theoretically this is exactly what I want; we'll get into why reality had different plans a bit later.
</p>
</div>
</div>
<div id="outline-container-file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-weird-things" class="outline-3">
<h3 id="file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-weird-things"><span class="section-number-3">6.6.</span> Weird Things</h3>
<div class="outline-text-3" id="text-6-6">
<p>
The other app available acted as a non-standard USB device that someone pulled the protocol for out of their ass. In order to make it work you need a special program on your computer, which replaces the GPG daemon. You still use the <code>gpg</code> command; you're just not talking to the real GPG daemon but some <a href="https://github.com/romanz/trezor-agent">Python monstrosity</a>.
</p>

<p>
Madness.
</p>

<p>
There are two other problems which disqualified this approach completely. As of now, you can't upload your own key to the app. What? What? What? The recommended way is to generate keys on the device and then sign them with your master key. You also can't get the keys out, and deriving them from the master key of the Ledger itself is experimental. So that 24-word sheet you have as the backup of your Ledger? Useless.
</p>
</div>
<div id="outline-container-file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-weird-things-24-word-backup" class="outline-4">
<h4 id="file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-weird-things-24-word-backup"><span class="section-number-4">6.6.1.</span> 24 Word Backup</h4>
<div class="outline-text-4" id="text-6-6-1">
<p>
I forgot to mention this, but during the initial setup you are given 24 English words which directly map to your master key, from which everything (except OpenPGP :) ) is derived. That way, if you lose or destroy your Ledger, you only need the sheet you (hopefully) wrote the words down on and a new device. It's a weak point, yes, but a necessary backup.
</p>
</div>
</div>
<div id="outline-container-file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-weird-things-gpg-replacement" class="outline-4">
<h4 id="file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-weird-things-gpg-replacement"><span class="section-number-4">6.6.2.</span> GPG Replacement</h4>
<div class="outline-text-4" id="text-6-6-2">
<p>
Also, replacing your GPG daemon with this weirdness means that you can't really have multiple devices with different keys, or keep some keys just on your computer and not on the Ledger. To switch back to the real GPG daemon you'd have to run <code>pkill ledger-agent &amp;&amp; gpg-agent --daemon</code>. Unacceptable in my eyes.
</p>
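
<p>
For completeness, a sketch of that switch-back using stock GnuPG tooling. The <code>gpgconf</code> and <code>gpg-connect-agent</code> invocations are standard GnuPG commands; the process name assumes ledger-agent is running as a drop-in gpg-agent replacement:
</p>

<div class="org-src-container">
<pre class="src src-sh"># Sketch: switch back from ledger-agent to the stock gpg-agent.
# Assumes ledger-agent is currently acting as your gpg-agent.
pkill ledger-agent          # stop the Ledger-backed agent
gpgconf --kill gpg-agent    # clear out any stale stock agent
gpg-agent --daemon          # start the real gpg-agent again
gpg-connect-agent /bye      # verify an agent answers on the socket
</pre>
</div>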
</div>
</div>
</div>
<div id="outline-container-file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-first-problems" class="outline-3">
<h3 id="file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-first-problems"><span class="section-number-3">6.7.</span> First Problems</h3>
<div class="outline-text-3" id="text-6-7">
<p>
Smart card emulation is exactly what I want. But… it doesn't work: the app hasn't been updated in a very long time and doesn't even build with the latest SDK. I had to duct-tape it together with my mediocre C skills; eventually it compiled and I had an <code>openpgp.bin</code> on my hands. Now came the presumably easy part: flashing it onto my Ledger. It's my device, so I should be able to flash it whenever I please. Very wrong assumption.
</p>

<p>
I soon learned that I can't flash it, only sideload it. I was a bit disappointed, but I calmed myself and read on. And what do you know, sideloading is <b>not</b> supported on the Nano X, only on the Nano S. Amazing! Not only won't Ledger fix their own fucking app, but when I decide to do the work for these idiots, I can't even test what I've produced. Still, I wanted this whole Ledger thing to work out, because the prospect of Bluetooth and such a practical form factor was really damn appealing. My next pit stop was the <a href="https://github.com/LedgerHQ/speculos">speculos</a> emulator.
</p>
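
<p>
For context, sideloading on a Nano S goes through Ledger's own <code>ledgerblue</code> loader, roughly like this. The target id is the usual Nano S value from Ledger's app Makefiles, and the file and app names are placeholders, since I had no Nano S to test on:
</p>

<div class="org-src-container">
<pre class="src src-sh"># Sketch: sideloading a self-built app onto a Nano S.
# The Nano X simply refuses this; there is no equivalent step for it.
# 0x31100004 is the Nano S target id; file/app names are placeholders.
pip install --user ledgerblue
python3 -m ledgerblue.loadApp \
  --targetId 0x31100004 \
  --fileName bin/app.hex \
  --appName "OpenPGP Card" \
  --appFlags 0x00
</pre>
</div>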
</div>
</div>
<div id="outline-container-file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-speculos" class="outline-3">
<h3 id="file-build-61f89klyyzgsbx2pxg5pa03p3qmqk1i5-source-blog-on-freedom-crypto-and-return-policies-org-speculos"><span class="section-number-3">6.8.</span> Speculos</h3>
<div class="outline-text-3" id="text-6-8">
<p>
In theory, you take the binary file you got out of compilation and just emulate the Ledger with it. Easy, right? Once again, wrong! In reality, compiling the beast is nearly impossible because, what do you know, the build script fetches dependencies from the internet. One of the few cardinal sins I believe in. I eventually gave up on compiling it and went for the Docker container instead, since I saw they had a <code>Dockerfile</code> in the root of the repository. So I assumed there had to be an image somewhere, and unsurprisingly there was one on <a href="https://hub.docker.com/">Docker Hub</a>. It took me a while to find, because it wasn't linked anywhere, so I had to go digging for it.
</p>

<p>
I downloaded it and it started up. But that's all it did, as the version I got didn't support the SDK version I used. Interestingly, an <code>.elf</code> was generated alongside the binary file, so I wonder why I got an ugly <code>Couldn't emulate syscall</code> instead of a nice <code>SDK unsupported</code> error. Fun.
</p>
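
<p>
For reference, running the prebuilt image by hand looks roughly like this, mirroring the flags from speculos's own compose file. The paths and the <code>--sdk</code> value depend on what your build actually produced:
</p>

<div class="org-src-container">
<pre class="src src-sh"># Sketch: run the prebuilt speculos image against a compiled app.
# Flags mirror speculos's docker-compose; adjust paths and SDK version.
docker run --rm -it \
  -v "$(pwd)/apps:/speculos/apps" \
  -p 5000:5000 -p 40000:40000 -p 41000:41000 \
  ledgerhq/speculos \
  --model nanos ./apps/btc.elf --sdk 2.0 \
  --display headless --apdu-port 40000 --vnc-port 41000
</pre>
</div>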

<p>
I then tried to build my own image, as surely it must build, right? See if you can spot what's wrong with this Docker Compose file.
</p>

<div class="org-src-container">
<pre class="src src-yaml"><span class="org-variable-name">version</span>: <span class="org-string">"3.7"</span>

<span class="org-variable-name">services</span>:
  <span class="org-variable-name">nanos</span>:
    <span class="org-variable-name">build</span>: .
    <span class="org-variable-name">image</span>: ledgerhq/speculos
    <span class="org-variable-name">volumes</span>:
      - ./apps:/speculos/apps
    <span class="org-variable-name">ports</span>:
      - <span class="org-string">"1234:1234"</span> <span class="org-comment-delimiter"># </span><span class="org-comment">gdb</span>
      - <span class="org-string">"5000:5000"</span> <span class="org-comment-delimiter"># </span><span class="org-comment">api</span>
      - <span class="org-string">"40000:40000"</span> <span class="org-comment-delimiter"># </span><span class="org-comment">apdu</span>
      - <span class="org-string">"41000:41000"</span> <span class="org-comment-delimiter"># </span><span class="org-comment">vnc</span>
    <span class="org-variable-name">command</span>: <span class="org-string">"--model nanos ./apps/btc.elf --sdk 2.0 --seed secret --display headless --apdu-port 40000 --vnc-port 41000"</span>
    <span class="org-comment-delimiter"># </span><span class="org-comment">Add `--vnc-password "&lt;password&gt;"` for macos users to use built-in vnc client.</span>
</pre>
</div>

<p>
Did you spot it? It's the <code>build: .</code> sitting right next to <code>image: ledgerhq/speculos</code>. It's as if they don't want people to know that the image in fact <b>does NOT build at all</b>. At this point I'd had enough: I'd spent a whole day on this piece of steaming garbage and gotten exactly nowhere. I even opened, and promptly closed, <a href="https://github.com/LedgerHQ/openpgp-card-app/issues/72">this</a> fun and wholesome issue.
</p>
</div>
</div>
]]></description>
</item>
</channel>
</rss>
