<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.9.5">Jekyll</generator><link href="https://tobisyurt.net/feed.xml" rel="self" type="application/atom+xml" /><link href="https://tobisyurt.net/" rel="alternate" type="text/html" /><updated>2026-01-21T22:04:18+01:00</updated><id>https://tobisyurt.net/feed.xml</id><title type="html">tobi exploring, learning and having fun</title><subtitle>Tobi having fun in his homelab... Started with a plain old desktop pc repurposed as NAS. This got me motivated to try out more and more at home and today I consider it a hobby.
</subtitle><author><name>Toubi van Kenoubi</name></author><entry><title type="html">Reverse Tunneling</title><link href="https://tobisyurt.net/reverse-tunneling" rel="alternate" type="text/html" title="Reverse Tunneling" /><published>2026-01-06T00:00:00+01:00</published><updated>2026-01-06T00:00:00+01:00</updated><id>https://tobisyurt.net/reverse-tunneling</id><content type="html" xml:base="https://tobisyurt.net/reverse-tunneling">&lt;p&gt;I previously kept my backup NAS in the same location as my primary NAS, connected to the same power grid and without a UPS. I have now been offered a new location for the backup NAS and plan to move it there to introduce geographic redundancy.&lt;/p&gt;

&lt;p&gt;The idea is to have the backup NAS operate in pull mode. It will connect as a VPN client and replicate datasets incrementally using ZFS. In addition, I plan to deploy an old Raspberry Pi at the new location. The Pi will run 24/7 and wake the backup NAS once a week so it can start the replication process.&lt;/p&gt;

&lt;p&gt;I do not want to set up a site-to-site VPN tunnel, as I prefer not to modify the router at the new location. Instead, I plan to keep a persistent reverse tunnel from the Raspberry Pi to a jump host in my homelab for maintenance purposes. This setup will allow me to connect to the Pi via SSH, wake the backup NAS, and access the backup NAS web interface. The Raspberry Pi hardware should be more than sufficient for these lightweight maintenance tasks, while the VPN handling the heavy data transfer will run on the much more powerful backup NAS hardware.&lt;/p&gt;

&lt;p&gt;There are certainly other ways to achieve this. The reasons I chose this architecture are:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;It relies on tools I already know and have configured&lt;/li&gt;
  &lt;li&gt;The tools are available on almost any Linux system&lt;/li&gt;
  &lt;li&gt;I have long wanted to experiment with reverse tunnels, both out of curiosity and to better understand malicious intrusions and data exfiltration techniques&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you can think of a better approach, feel free to let me know. SSH tunneling is certainly not the most performant solution, especially on a Raspberry Pi.&lt;/p&gt;

&lt;p&gt;The following diagram visualizes a simple sample setup and introduces the names that I will reuse later.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/assets/images/reverse-tunnel-simple.svg&quot;&gt;&lt;img src=&quot;/assets/images/reverse-tunnel-simple.svg&quot; alt=&quot;simple reverse ssh tunnel&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2 id=&quot;preparing-the-pi-and-the-reverse-jump&quot;&gt;Preparing the Pi and the Reverse Jump&lt;/h2&gt;

&lt;p&gt;Requirements:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The system should be highly isolated, as it needs to be exposed to the internet.&lt;/li&gt;
  &lt;li&gt;The SSH server running on it must be hardened.&lt;/li&gt;
  &lt;li&gt;The local firewall must be enabled.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To meet these requirements, I placed the system in my DMZ VLAN and added a very restrictive firewall rule set. By default, it is not allowed to connect to anything: neither hosts in the same subnet, nor other internal subnets, nor the internet, except the Debian repositories for updates.&lt;/p&gt;

&lt;p&gt;In addition, I hardened the SSH server on both the Raspberry Pi and the Reverse Jump as follows:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;AllowUsers user-reverse-jump user-pi
PermitRootLogin no
PasswordAuthentication no
VersionAddendum none
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
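
&lt;p&gt;Before restarting the SSH daemon with a modified configuration, it is worth validating it first so a typo does not lock you out. A small sketch using the Debian default paths and service name:&lt;/p&gt;

```shell
# Check the configuration for syntax errors before applying it;
# sshd refuses to start on an invalid config.
sudo sshd -t -f /etc/ssh/sshd_config

# Only reload once the test passes
sudo systemctl reload ssh
```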

&lt;p&gt;For testing, I initiated the reverse tunnel on the Raspberry Pi using: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ssh -p 231 -R 2222:localhost:22 user-reverse-jump@reverse-jump&lt;/code&gt;.
I then connected to it from the Reverse Jump host with: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ssh -p 2222 user-pi@localhost&lt;/code&gt;.&lt;/p&gt;

&lt;h2 id=&quot;tunnel-http-traffic-through&quot;&gt;Tunneling HTTP Traffic Through&lt;/h2&gt;

&lt;p&gt;To access the web interface of my backup NAS, I also need to proxy web requests from the Reverse Jump through the tunnel, effectively allowing me to browse as if I were on the Raspberry Pi. To achieve this, I start a SOCKS&lt;sup id=&quot;fnref:1&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot; role=&quot;doc-noteref&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; v5 proxy on the Pi and expose it through the tunnel.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/assets/images/reverse-tunnel-socks-proxy.svg&quot;&gt;&lt;img src=&quot;/assets/images/reverse-tunnel-socks-proxy.svg&quot; alt=&quot;socks v5 proxy&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What I did to make this work:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Started the SOCKS proxy on the Pi: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ssh -N -D 1080 localhost&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;Exposed the proxy through the tunnel by running the following on the Pi: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ssh -p 231 -R 1080:localhost:1080 user-reverse-jump@reverse-jump&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;On the Reverse Jump, I run a lightweight desktop environment with Firefox configured to use the SOCKS proxy.&lt;/li&gt;
&lt;/ul&gt;
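
&lt;p&gt;To quickly verify the exposed proxy from the Reverse Jump without starting a browser, a curl check can be used. Note that backup-nas is a placeholder here for whatever address the NAS has on the remote network:&lt;/p&gt;

```shell
# --socks5-hostname also resolves DNS through the proxy,
# i.e. on the far side of the tunnel.
# "backup-nas" is a placeholder; use the NAS address on the remote network.
curl --socks5-hostname localhost:1080 http://backup-nas/
```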

&lt;h2 id=&quot;finalizing-the-setup&quot;&gt;Finalizing the Setup&lt;/h2&gt;

&lt;h3 id=&quot;raspberry-pi&quot;&gt;Raspberry Pi&lt;/h3&gt;

&lt;p&gt;On the Raspberry Pi side, I needed to make everything persistent and ensure it starts automatically on boot. It also needed to reconnect automatically in case of network outages. While looking for a solution, I came across autossh, which describes itself in its man page as follows:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;autossh is a program to start a copy of ssh and monitor it, restarting it as necessary should it die or stop passing traffic.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;autossh -M 0 -N -p 231 \
-R 2222:localhost:22 \
-R 1080:localhost:1080 \
user-reverse-jump@reverse-jump
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Using systemd units, I configured everything to start at boot. This makes the setup easy to restart, redeploy, or move to a new location, and it ensures that I always regain a reverse shell from that network as long as outbound internet access is permitted.&lt;/p&gt;

&lt;p&gt;/etc/systemd/system/autossh-reverse.service&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;[Unit]
Description=AutoSSH reverse SSH + reverse SOCKS
After=network-online.target ssh-socks.service
Wants=network-online.target ssh-socks.service

[Service]
User=YOUR_LOCAL_USERNAME
Environment=&quot;AUTOSSH_GATETIME=0&quot;
ExecStart=/usr/bin/autossh \
  -M 0 -N \
  -o ServerAliveInterval=30 \
  -o ServerAliveCountMax=3 \
  -o ExitOnForwardFailure=yes \
  -p 231 \
  -R 2222:localhost:22 \
  -R 1080:localhost:1080 \
  toob@10.0.9.102
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;/etc/systemd/system/ssh-socks.service&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;[Unit]
Description=Local SSH SOCKS proxy
After=network.target

[Service]
User=YOUR_LOCAL_USERNAME
ExecStart=/usr/bin/ssh -N -D 127.0.0.1:1080 localhost
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
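
&lt;p&gt;With both unit files in place, the standard systemd workflow enables them at boot and starts them right away:&lt;/p&gt;

```shell
# Pick up the new unit files, then start them now and on every boot
sudo systemctl daemon-reload
sudo systemctl enable --now ssh-socks.service autossh-reverse.service

# Verify that both tunnels are up
systemctl status ssh-socks.service autossh-reverse.service
```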

&lt;h3 id=&quot;using-the-reverse-jump-as-a-proxy&quot;&gt;Using the Reverse Jump as a Proxy&lt;/h3&gt;

&lt;p&gt;To allow other machines on the same network as the Reverse Jump to connect to the tunnel (SSH and SOCKS), I needed to make a few small adjustments.&lt;/p&gt;

&lt;p&gt;On the Reverse Jump, I updated the SSH daemon configuration as follows:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;AllowTcpForwarding yes
GatewayPorts yes
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
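
&lt;p&gt;With GatewayPorts enabled, the forwarded ports bind to all interfaces on the Reverse Jump, so other machines on its network can use them directly. For example, with the hostnames introduced in the diagram:&lt;/p&gt;

```shell
# SSH to the Pi from any machine on the Reverse Jump's network
ssh -p 2222 user-pi@reverse-jump

# For the web interface, point a browser's SOCKS v5 proxy
# setting at reverse-jump:1080 instead of localhost:1080.
```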

&lt;p&gt;&lt;a href=&quot;/assets/images/reverse-tunnel-final-setup.svg&quot;&gt;&lt;img src=&quot;/assets/images/reverse-tunnel-final-setup.svg&quot; alt=&quot;final setup&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;hr data-content=&quot;footnotes&quot; /&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot;&gt;
      &lt;p&gt;A SOCKS server is a proxy that forwards network traffic at the transport layer, allowing applications to route their connections through another host without being aware of the underlying network topology. Its main benefits are flexibility and protocol independence, making it useful for securely accessing services across restricted or segmented networks. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;</content><author><name>Toubi van Kenoubi</name></author><category term="[&quot;homelab&quot;]" /><summary type="html">I previously kept my backup NAS in the same location as my primary NAS, connected to the same power grid and without a UPS. I have now been offered a new location for the backup NAS and plan to move it there to introduce geographic redundancy.</summary></entry><entry><title type="html">Autopsy On Debian</title><link href="https://tobisyurt.net/autopsy-on-debian" rel="alternate" type="text/html" title="Autopsy On Debian" /><published>2025-09-18T00:00:00+02:00</published><updated>2025-09-18T00:00:00+02:00</updated><id>https://tobisyurt.net/autopsy-on-debian</id><content type="html" xml:base="https://tobisyurt.net/autopsy-on-debian">&lt;p&gt;&lt;a href=&quot;/assets/images/autopsy-logo.svg&quot;&gt;&lt;img src=&quot;/assets/images/autopsy-logo.svg&quot; alt=&quot;Autopsy&quot; width=&quot;600&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am happily running Debian on my personal laptop and want to run the newest Autopsy version on it with
its nice new GUI. Unfortunately, there is only a Windows installer and a Snap package. Even though I am no fan of
Canonical’s Snaps, I quickly tried it, but it did not work. So I decided to build and install it myself. I first did
this before the release of Debian 13 (Trixie) and repeated it once more after the upgrade to Trixie.
Because SleuthKit depends on a Debian Java 17 package, the process got slightly more complex.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Clone the source code: &lt;a href=&quot;https://github.com/sleuthkit/autopsy.git&quot;&gt;https://github.com/sleuthkit/autopsy.git&lt;/a&gt;
or download the bundled release you like; &lt;a href=&quot;https://github.com/sleuthkit/autopsy/releases/tag/autopsy-4.22.1&quot;&gt;https://github.com/sleuthkit/autopsy/releases/tag/autopsy-4.22.1&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Run the script &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;unix_setup.sh -j /opt/java17&lt;/code&gt; and follow what it reports, which is:
    &lt;ol&gt;
      &lt;li&gt;Install or provide Java 17. In Debian 12 the default Java version was 17, but in 13 it is Java 21, which is why I installed Java 17 manually. See here: &lt;a href=&quot;https://adoptium.net/temurin/releases/?variant=openjdk17&amp;amp;version=17&amp;amp;os=any&amp;amp;arch=any&quot;&gt;https://adoptium.net/temurin/releases/?variant=openjdk17&amp;amp;version=17&amp;amp;os=any&amp;amp;arch=any&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;Download the SleuthKit Java library, because it is not in the Debian repos. The sleuthkit package found there is only the C++ CLI tool, not what Autopsy needs. I used their provided .deb package: &lt;a href=&quot;https://github.com/sleuthkit/sleuthkit/releases/download/sleuthkit-4.14.0/sleuthkit-java_4.14.0-1_amd64.deb&quot;&gt;https://github.com/sleuthkit/sleuthkit/releases/download/sleuthkit-4.14.0/sleuthkit-java_4.14.0-1_amd64.deb&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a id=&quot;step3&quot;&gt;&lt;/a&gt; Install it: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sudo apt install /opt/sleuthkit/sleuthkit-java_4.14.0-1_amd64.deb&lt;/code&gt;. Unfortunately, the following occurred:
        &lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt; sudo apt install /opt/sleuthkit/sleuthkit-java_4.14.0-1_amd64.deb 

 Error! Some packages could not be installed. This may mean that you have requested
 an impossible situation or if you are using the unstable distribution that some
 required packages have not yet been created or been moved out of Incoming.
 The following information may help to resolve the situation:
 Unsatisfied dependencies:
  sleuthkit-java : Depends: openjdk-17-jre but it is not installable
 Error: Unable to correct problems, you have held broken packages.
 Error: The following information from --solver 3.0 may provide additional context:
 Unable to satisfy dependencies. Reached two conflicting decisions: 
 1. sleuthkit-java:amd64=4.14.0-1 is selected for install 
 2. sleuthkit-java:amd64 Depends openjdk-17-jre but none of the choices are installable: [no choices]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;        &lt;/div&gt;
      &lt;/li&gt;
      &lt;li&gt;I had already installed a Java 17 version, so I just needed the install to proceed. For this I used the equivs tool to mimic a Debian package (maybe there is a cleaner solution?):
        &lt;ol&gt;
          &lt;li&gt;Install and generate the build file:
            &lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt; sudo apt install equivs
 equivs-control openjdk17-jre
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;            &lt;/div&gt;
          &lt;/li&gt;
          &lt;li&gt;Modify the file &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;openjdk17-jre&lt;/code&gt; as follows:
            &lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt; Package: openjdk-17-jre
 Version: 17.0
 Provides: openjdk-17-jre
 Description: Dummy package to satisfy sleuthkit-java dep
  This is a fake package because I already have JDK 17 installed manually.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;            &lt;/div&gt;
          &lt;/li&gt;
          &lt;li&gt;Build and install it: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;equivs-build openjdk17-jre &amp;amp;&amp;amp; sudo apt install ./openjdk-17-jre_17.0_all.deb&lt;/code&gt;&lt;/li&gt;
        &lt;/ol&gt;
      &lt;/li&gt;
      &lt;li&gt;After this the install worked, see &lt;a href=&quot;#step3&quot;&gt;3.&lt;/a&gt;&lt;/li&gt;
    &lt;/ol&gt;
  &lt;/li&gt;
  &lt;li&gt;Rerun &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;unix_setup.sh -j /opt/java17&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Run Autopsy (from the directory where you cloned or unpacked it): &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bin/autopsy --jdkhome /opt/java/jdk-17.0.12+7&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That should be it.&lt;/p&gt;</content><author><name>Toubi van Kenoubi</name></author><category term="[&quot;security&quot;, &quot;forensic&quot;]" /><summary type="html"></summary></entry><entry><title type="html">Podman Reloaded</title><link href="https://tobisyurt.net/podman-reloaded" rel="alternate" type="text/html" title="Podman Reloaded" /><published>2025-01-27T00:00:00+01:00</published><updated>2025-01-27T00:00:00+01:00</updated><id>https://tobisyurt.net/podman-reloaded</id><content type="html" xml:base="https://tobisyurt.net/podman-reloaded">&lt;p&gt;&lt;a href=&quot;/assets/images/podman.svg&quot;&gt;&lt;img src=&quot;/assets/images/podman.svg&quot; alt=&quot;Podman&quot; width=&quot;600&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In an earlier post, I wrote about &lt;a href=&quot;/podman&quot;&gt;Podman&lt;/a&gt;. Now I wanted
to migrate some applications to Podman and received a deprecation warning from the command:
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;podman generate systemd --new --files --name test&lt;/code&gt;.
The warning suggests adapting to Quadlet, which provides a nicer workflow. So I did, but it took some reading to
figure it out, which is why I’m summarizing it here.&lt;/p&gt;

&lt;p&gt;This change offers a clean and elegant solution, seamlessly integrating Podman with systemd. While it took 
me some time as a user to get everything running smoothly again, the effort was absolutely worth it.&lt;/p&gt;

&lt;p&gt;This is another blog I found helpful: &lt;a href=&quot;https://mo8it.com/blog/quadlet/&quot;&gt;https://mo8it.com/blog/quadlet/&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;quadlet&quot;&gt;Quadlet&lt;/h2&gt;
&lt;p&gt;Quadlet is integrated into Podman and allows users to create systemd services in a declarative way. You 
can find more details here:&lt;br /&gt;
&lt;a href=&quot;https://www.redhat.com/en/blog/quadlet-podman&quot;&gt;https://www.redhat.com/en/blog/quadlet-podman&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I placed the quadlet files in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.config/containers/systemd/&lt;/code&gt;. A simple app can look like this:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# some-app.container
[Container]
AutoUpdate=registry
ContainerName=some-app
Environment=TZ=Europe/Zurich
Image=lscr.io/linuxserver/some-app:latest
Pod=some-app-stack.pod
PodmanArgs=--tty
Volume=some-app_data:/config

[Service]
Restart=always

[Install]
WantedBy=default.target
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
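
&lt;p&gt;Quadlet regenerates the service unit on a daemon reload, so after placing or editing a .container file, the usual user-level systemd workflow applies:&lt;/p&gt;

```shell
# Regenerate the unit from the quadlet file and start the container
systemctl --user daemon-reload
systemctl --user start some-app.service

# Inspect the generated unit and its logs
systemctl --user status some-app.service
journalctl --user -u some-app.service
```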

&lt;p&gt;With the following command one can verify what the generated files look like:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;/usr/libexec/podman/quadlet -dryrun -user
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;quadlet-generator[4645]: Loading source unit file /home/toob/.config/containers/systemd/some-app.container
---some-app.service---
# some-app.container
[X-Container]
AutoUpdate=registry
ContainerName=some-app
Environment=TZ=Europe/Zurich
Image=lscr.io/linuxserver/some-app:latest
Pod=some-app-stack.pod
PodmanArgs=--tty
Volume=some-app_data:/config

[Service]
Restart=always
Environment=PODMAN_SYSTEMD_UNIT=%n
KillMode=mixed
ExecStop=/usr/bin/podman rm -v -f -i --cidfile=%t/%N.cid
ExecStopPost=-/usr/bin/podman rm -v -f -i --cidfile=%t/%N.cid
Delegate=yes
Type=notify
NotifyAccess=all
SyslogIdentifier=%N
ExecStart=/usr/bin/podman run --name some-app --cidfile=%t/%N.cid --replace --rm --cgroups=split --sdnotify=conmon -d -v some-app_data:/config --label io.containers.autoupdate=registry --env TZ=Europe/Zurich --pod-id-file %t/some-app-stack-pod.pod-id --tty lscr.io/linuxserver/some-app:latest

[Install]
WantedBy=default.target

[Unit]
Wants=podman-user-wait-network-online.service
After=podman-user-wait-network-online.service
SourcePath=/home/toob/.config/containers/systemd/some-app.container
RequiresMountsFor=%t/containers
BindsTo=some-app-stack-pod.service
After=some-app-stack-pod.service
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;podlet&quot;&gt;Podlet&lt;/h3&gt;

&lt;p&gt;Podlet helps to generate such files. You can find it on GitHub:
&lt;a href=&quot;https://github.com/containers/podlet?tab=readme-ov-file&quot;&gt;https://github.com/containers/podlet?tab=readme-ov-file&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It supports many different modes, including generating from already running resources, which is the closest replacement
for the old podman generate command. However, it also supports interesting features like generating from Docker Compose files.&lt;/p&gt;

&lt;p&gt;Here’s an example of how I used it to generate a Quadlet file from an existing pod:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;podlet generate pod some-app-stack
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I think the functionality of the generate option hasn’t received much attention. It only supports a small subset of creation 
options and may fail. In my case, I had to recreate almost everything manually.&lt;/p&gt;

&lt;h3 id=&quot;troubleshooting&quot;&gt;Troubleshooting&lt;/h3&gt;
&lt;p&gt;Initially, I encountered problems with the AutoUpdate=registry setting for my Grafana instance. Since I want it to 
auto-update for easier maintenance, I included this argument in my quadlet file. However, with this setting, Grafana 
failed to start. When I removed the argument, it started without any issues.&lt;/p&gt;

&lt;p&gt;Upon checking the service logs, I noticed a warning about the image reference, indicating that it should be a fully qualified 
domain name (FQDN).&lt;/p&gt;

&lt;p&gt;To resolve this, I updated my image reference from Image=grafana/grafana:latest to Image=docker.io/grafana/grafana:latest. 
After this change, everything worked as expected. In hindsight, this makes sense because Podman does not assume Docker Hub as 
the default registry. Still, it was a bit of a headache to figure out.&lt;/p&gt;
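
&lt;p&gt;An alternative to fully qualifying every image reference is to let Podman search a registry for unqualified names. A sketch of the relevant setting in /etc/containers/registries.conf:&lt;/p&gt;

```toml
# Registries Podman consults for unqualified image names such as grafana/grafana
unqualified-search-registries = ["docker.io"]
```

&lt;p&gt;Fully qualified references remain the more explicit option, though.&lt;/p&gt;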

&lt;h2 id=&quot;auto-updates-and-rollbacks&quot;&gt;Auto-Updates and Rollbacks&lt;/h2&gt;

&lt;p&gt;The auto-updates and rollbacks continue to function as before. You can refer to my previous post for more details.&lt;/p&gt;</content><author><name>Toubi van Kenoubi</name></author><category term="[&quot;homelab&quot;]" /><summary type="html"></summary></entry><entry><title type="html">Minecraft Server</title><link href="https://tobisyurt.net/minecraft-server" rel="alternate" type="text/html" title="Minecraft Server" /><published>2024-03-23T00:00:00+01:00</published><updated>2024-03-23T00:00:00+01:00</updated><id>https://tobisyurt.net/minecraft-server</id><content type="html" xml:base="https://tobisyurt.net/minecraft-server">&lt;h2 id=&quot;hosting-a-minecraft-server&quot;&gt;Hosting a Minecraft Server&lt;/h2&gt;

&lt;p&gt;I am a fan of Debian, so that is the base of the server.&lt;/p&gt;

&lt;h3 id=&quot;fabric-loader&quot;&gt;Fabric loader&lt;/h3&gt;

&lt;p&gt;I was following the official &lt;a href=&quot;https://fabricmc.net/use/server/&quot;&gt;fabric documentation&lt;/a&gt;:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Install curl&lt;/li&gt;
  &lt;li&gt;Install Java 17: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;apt install openjdk-17-jre-headless&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Choose the version you want&lt;/li&gt;
  &lt;li&gt;Download: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;curl -OJ https://meta.fabricmc.net/v2/versions/loader/1.20.4/0.15.7/1.0.0/server/jar&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Run the jar once, so that all necessary files and folders get generated&lt;/li&gt;
  &lt;li&gt;Adjust eula.txt and server.properties&lt;/li&gt;
  &lt;li&gt;Copy the Fabric API jar and mods in place&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;make-it-a-service&quot;&gt;Make it a service&lt;/h3&gt;

&lt;p&gt;With screen you can start and interact with the server. For more information see &lt;a href=&quot;https://www.gnu.org/software/screen/manual/screen.html#toc-Getting-Started-1&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Service File&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;[Unit]
Description=Minecraft Server: %i
After=network.target

[Service]
WorkingDirectory=/opt/%i

User=minecraft
Group=minecraft

Restart=always

ExecStart=/usr/bin/screen -DmS mc-%i /usr/bin/java -Xmx6G -jar fabric-server-mc.1.20.4-loader.0.15.7-launcher.1.0.0.jar nogui

ExecStop=/usr/bin/screen -p 0 -S mc-%i -X eval &apos;stuff &quot;say SERVER SHUTTING DOWN IN 20 SECONDS.&quot;\015&apos;
ExecStop=/bin/sleep 1
ExecStop=/usr/bin/screen -p 0 -S mc-%i -X eval &apos;stuff &quot;save-all&quot;\015&apos;
ExecStop=/usr/bin/screen -p 0 -S mc-%i -X eval &apos;stuff &quot;say  SAVED EVERYTHING, STOP IN 10 SECONDS...&quot;\015&apos;
ExecStop=/bin/sleep 10
ExecStop=/usr/bin/screen -p 0 -S mc-%i -X eval &apos;stuff &quot;stop&quot;\015&apos;


[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Enable the service: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;systemctl enable minecraft@hermitcraft&lt;/code&gt;.
Start the service: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;systemctl start minecraft@hermitcraft&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Note the WorkingDirectory defined in the service unit: each server lives in its own folder under /opt, such as hermitcraft. This way I only have to write a single template service file.&lt;/p&gt;

&lt;h3 id=&quot;make-sure-everything-gets-saved&quot;&gt;Make sure everything gets saved&lt;/h3&gt;

&lt;p&gt;One could probably achieve this via systemd as well, but I found it easier this way…&lt;/p&gt;

&lt;p&gt;Place a shutdown script in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/init.d/&lt;/code&gt; which takes care of saving the world before a shutdown or a reboot.
It only needs to stop the service and give it some time to do its job.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;#! /bin/sh

# Service whose world must be saved before shutdown
service_name=&quot;minecraft@hermitcraft&quot;

systemctl stop &quot;$service_name&quot;

# Wait for the service to be stopped
while systemctl is-active --quiet &quot;$service_name&quot;; do
    sleep 5
done

exit 0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Create soft links in the proper runlevel directories (0 and 6, for shutdown and reboot): &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/rc0.d/&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/rc6.d/&lt;/code&gt;&lt;/p&gt;
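
&lt;p&gt;Assuming the script above was saved as /etc/init.d/minecraft-save (a name chosen here purely for illustration), the links might be created like this:&lt;/p&gt;

```shell
# Make the script executable and link it into the halt (0) and reboot (6)
# runlevels; the K prefix marks it as a stop script.
sudo chmod +x /etc/init.d/minecraft-save
sudo ln -s ../init.d/minecraft-save /etc/rc0.d/K01minecraft-save
sudo ln -s ../init.d/minecraft-save /etc/rc6.d/K01minecraft-save
```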

&lt;h3 id=&quot;backup&quot;&gt;Backup&lt;/h3&gt;

&lt;p&gt;A Minecraft server by default saves all maps to disk every 5 minutes. See this answer from an AI:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;In Minecraft, the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;save-all&lt;/code&gt; command is used to manually trigger the server to save the current state of the world to disk. However, by default, Minecraft servers also have an auto-save feature enabled, which automatically saves the world at regular intervals. 
The frequency of these auto-saves can be configured in the server settings. By default, Minecraft servers save the world every 6000 ticks, which is approximately every 5 minutes. This auto-save feature helps prevent data loss in case of unexpected server crashes or shutdowns.
However, server administrators may still use the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;save-all&lt;/code&gt; command manually, especially before making significant changes to the world or performing maintenance tasks.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This means that I can take ZFS snapshots of the entire LXC and handle it with the hypervisor.&lt;/p&gt;</content><author><name>Toubi van Kenoubi</name></author><category term="[&quot;homelab&quot;]" /><summary type="html">Hosting a Minecraft Server</summary></entry><entry><title type="html">K8s Going Deeper</title><link href="https://tobisyurt.net/k8s-going-deeper" rel="alternate" type="text/html" title="K8s Going Deeper" /><published>2023-10-28T00:00:00+02:00</published><updated>2023-10-28T00:00:00+02:00</updated><id>https://tobisyurt.net/k8s-going-deeper</id><content type="html" xml:base="https://tobisyurt.net/k8s-going-deeper">&lt;p&gt;In the following posts, I started with my k8s cluster at home:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;/k8s&quot;&gt;k8s getting ready&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;/helm&quot;&gt;helm&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this post I want to go a little deeper and try out some things on my k8s cluster.
I don’t just want it to work; I also want to see it fail and then fix it,
for example by throwing tons of requests at it with JMeter or bringing down entire nodes.&lt;/p&gt;

&lt;h2 id=&quot;autoscaling&quot;&gt;Autoscaling&lt;/h2&gt;

&lt;h3 id=&quot;horizontalpodautoscaler&quot;&gt;HorizontalPodAutoscaler&lt;/h3&gt;
&lt;p&gt;This time I started reading the documentation&lt;sup id=&quot;fnref:1&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot; role=&quot;doc-noteref&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;, which is not always what I do first…
There one can read that the HorizontalPodAutoscaler (HPA for short) acts on the following formula:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
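
&lt;p&gt;As a quick sanity check of the formula, here is a small Python sketch; the replica bounds are illustrative values, not part of the formula itself:&lt;/p&gt;

```python
import math

def desired_replicas(current_replicas, current_metric, desired_metric,
                     min_replicas=1, max_replicas=10):
    """HPA formula from the docs, clamped to the configured replica bounds."""
    raw = math.ceil(current_replicas * (current_metric / desired_metric))
    return max(min_replicas, min(max_replicas, raw))

# 2 pods averaging 180m CPU against a 90m target: scale up to 4
print(desired_replicas(2, 180, 90))   # 4
# 4 pods at 30m against the 90m target: scale back down to 2
print(desired_replicas(4, 30, 90))    # 2
```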

&lt;p&gt;The HPA needs a metric-server, which may not be there if you set up your k8s cluster by
yourself. I installed a metric server as follows:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;git clone https://github.com/kubernetes-incubator/metrics-server.git
cd charts/metrics-server
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;There you find the helm chart for the metric server. The requirements for it are described
in the README. At the following point in the requirements, I took the easy way:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Kubelet certificate needs to be signed by cluster Certificate Authority (or disable certificate validation by passing –kubelet-insecure-tls to Metrics Server)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;values.yml&lt;/code&gt; I added the last point under defaultArgs:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;defaultArgs:
  - --cert-dir=/tmp
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  - --kubelet-use-node-status-port
  - --metric-resolution=15s
  - --kubelet-insecure-tls
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then install it with:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;helm install metrics-server .
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I will use a small Spring Boot app, which I quickly made for this purpose.
It only has a GET endpoint with a parameter &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;count&lt;/code&gt;. The request runs
a for loop &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;count&lt;/code&gt; times and fills an ArrayList to also
consume memory.&lt;/p&gt;

&lt;p&gt;You can find it &lt;a href=&quot;https://hub.docker.com/r/toubivankenoubi/load&quot; target=&quot;_blank&quot;&gt;here on docker hub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With the metrics server installed, one can check the metrics with kubectl
(here in a specific namespace). This is useful for the following tests with auto-scaling
enabled.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl top pod -n comments
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I changed my helm chart as follows to enable autoscaling:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 80
  targetMemoryUtilizationPercentage: 80
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;One also needs to specify the resources the pods should request; otherwise
the HPA has no reference values to calculate utilization against…&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;resources:
  requests:
    cpu: 200m
    memory: 1Gi
  limits:
    cpu: 500m
    memory: 1Gi
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;m&lt;/code&gt; stands for CPU millicores, so &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;100m&lt;/code&gt; means &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;0.1 x 1 core&lt;/code&gt;.&lt;/p&gt;
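&lt;p&gt;Note that the percentage targets are measured against the requests. A quick illustration with example numbers:&lt;/p&gt;

```shell
# With a CPU request of 500m and an 80% utilization target,
# the target is reached at 400m of actual usage.
awk 'BEGIN { printf "%dm\n", 500 * 0.80 }'
# prints: 400m
```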

&lt;p&gt;With these settings the HPA should spawn new replicas as soon as a target is exceeded.&lt;/p&gt;

&lt;p&gt;At first it did not work as I expected, so I calculated it myself with the formula above.
I got the current metrics as follows:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl top pod -n comments
NAME                                    CPU(cores)   MEMORY(bytes)
comments-test-77c9c54498-zr8ns          1m           132Mi
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;So I had one replica and wanted the system to spawn at least one more. This is what
the formula yields at the moment for memory:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;1*(132/ (256*0.55)) = 0.9374999999999999
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
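&lt;p&gt;Reading the denominator as the memory request (256Mi) times the target utilization (55%), the ceiling shows why no second replica appeared:&lt;/p&gt;

```shell
# 132Mi current usage, 256Mi request, 55% target:
awk 'BEGIN {
  d = 1 * (132 / (256 * 0.55))
  printf "desiredReplicas = %d\n", (d == int(d)) ? d : int(d) + 1
}'
# prints: desiredReplicas = 1
```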

&lt;p&gt;With these settings, a request with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;count=10000000&lt;/code&gt;
easily provoked another replica, and after a while it scaled down again. Perfect…&lt;/p&gt;

&lt;h3 id=&quot;cluster-autoscaler&quot;&gt;Cluster autoscaler&lt;/h3&gt;

&lt;p&gt;TODO: test what happens if a node fails…&lt;/p&gt;

&lt;h2 id=&quot;next&quot;&gt;Next&lt;/h2&gt;
&lt;h3 id=&quot;auto-deployment&quot;&gt;auto deployment&lt;/h3&gt;
&lt;p&gt;argocd, blue-green / canary deployment?&lt;/p&gt;
&lt;h3 id=&quot;helm-charts-repo&quot;&gt;helm charts repo&lt;/h3&gt;
&lt;p&gt;A simple web server should do; see the documentation.&lt;/p&gt;
&lt;h3 id=&quot;stateful-sets&quot;&gt;Stateful Sets&lt;/h3&gt;
&lt;p&gt;For example: Galera Cluster, Sharded MongoDb or Kafka&lt;/p&gt;

&lt;hr data-content=&quot;footnotes&quot; /&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot;&gt;
      &lt;p&gt;k8s autoscale: &lt;a href=&quot;https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/&quot; target=&quot;_blank&quot;&gt;https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;</content><author><name>Toubi van Kenoubi</name></author><category term="[&quot;homelab&quot;]" /><summary type="html">In following posts, I started with my k8s cluster at home:</summary></entry><entry><title type="html">Home Automation</title><link href="https://tobisyurt.net/home-automation" rel="alternate" type="text/html" title="Home Automation" /><published>2023-09-10T00:00:00+02:00</published><updated>2023-09-10T00:00:00+02:00</updated><id>https://tobisyurt.net/home-automation</id><content type="html" xml:base="https://tobisyurt.net/home-automation">&lt;p&gt;Approximately a year ago I started to use Node-RED a little, see following post; 
&lt;a href=&quot;/home-energy&quot;&gt;Home Energy&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Recently I found a new use case. I am still a little old school and rather watch “Live TV” over 
dvb-c. I prefer it that way, because it is almost free, and I can still schedule recordings etc. 
In Switzerland, there is also a discussion about making ads non-skippable in internet
TV offerings, which is one more reason to record the shows I want to watch myself.&lt;/p&gt;

&lt;p&gt;I am running a “TV-Headend Server” (handling my dvb-c connection) on my Home Theater PC (HTPC for short),
but I don’t want to run it 24/7. That is where Node-RED comes into play.&lt;/p&gt;

&lt;h2 id=&quot;tv-recordings-with-smart-turn-onoff&quot;&gt;TV recordings with smart turn on/off&lt;/h2&gt;

&lt;p&gt;The requirements:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The HTPC should not run all the time.&lt;/li&gt;
  &lt;li&gt;The HTPC should only turn on long enough to record the wanted program on live TV.&lt;/li&gt;
  &lt;li&gt;The HTPC should only turn off if we are not already watching something in parallel (Kodi).&lt;/li&gt;
  &lt;li&gt;Turning it on should not interfere with current watching.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3 id=&quot;node-red&quot;&gt;Node-RED&lt;/h3&gt;

&lt;h4 id=&quot;turning-it-on&quot;&gt;Turning it on&lt;/h4&gt;
&lt;p&gt;I was already using the Wake on LAN nodes to wake my devices from the dashboard,
which is a very simple way to accomplish this. It is also no problem if a device is already on.
If you run Node-RED as a Docker container, make sure the devices are reachable by adding
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;network_mode: host&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/assets/images/node_red2.png&quot;&gt;&lt;img src=&quot;/assets/images/node_red2.png&quot; alt=&quot;Node-RED WoL&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I used the “Inject” node to schedule it. There are more complex timers available, but I 
preferred the simplest one, which is already part of the basic Node-RED installation. One may also wonder
why I wake two devices: my HTPC has two network interfaces, and I never know which one is
plugged in :D&lt;/p&gt;

&lt;h4 id=&quot;turning-it-off&quot;&gt;Turning it OFF&lt;/h4&gt;

&lt;p&gt;That is the slightly more complex task, because it was not obvious to me how to detect whether
someone is already watching something. I found the node ssh-v3, which lets you run any command
you could run over ssh. The relevant services are always running, and there is 
no indicator on the HTPC that tells whether it is just recording or someone is watching a movie
in parallel.&lt;/p&gt;

&lt;p&gt;Then I remembered that I am monitoring the power consumption of the whole entertainment system. The power 
consumption is the perfect indicator: if only the HTPC is running, it consumes about 25 W,
and if the TV and the audio receiver are also running, it is somewhere between 100 and 150 W.
&lt;a href=&quot;/home-energy&quot;&gt;Here&lt;/a&gt; you can read up on how I set up these
power measurements.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/assets/images/node_red3.png&quot;&gt;&lt;img src=&quot;/assets/images/node_red3.png&quot; alt=&quot;Node-RED ssh&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let me explain a little. I start the flow with an “Inject” node. In this case I have two of
them, because on Tuesdays I want to record something additional. These first nodes inject a
variable into &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;msg.command&lt;/code&gt;, which is already the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;shutdown now&lt;/code&gt; instruction for the ssh connection
later. The next InfluxDB node fetches the latest power measurement with:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;SELECT last(&quot;value&quot;) FROM &quot;power_sens2&quot; 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then in the next connected function node I put the logic like that:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;if (msg.payload[0].last &amp;gt; 50){ // more than 50 W: somebody is watching
    msg.payload = &apos;uptime&apos;;     // run something harmless instead
}else{
    msg.payload = msg.command;  // only the HTPC is on: shut it down
}

return msg;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;This should speak for itself. I replace the shutdown command with something innocent (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;uptime&lt;/code&gt;),
but there may be better solutions that simply interrupt the flow at that point.&lt;/p&gt;
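&lt;p&gt;The same decision logic can be sketched in plain shell (the power value would normally come from the InfluxDB query):&lt;/p&gt;

```shell
power=30                # last power reading in W (here hard-coded for the sketch)
if [ "$power" -gt 50 ]; then
  cmd="uptime"          # somebody is watching: run something harmless
else
  cmd="shutdown now"    # only the HTPC is drawing power: safe to turn off
fi
echo "$cmd"
# prints: shutdown now
```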

&lt;p&gt;Finally, it goes to the ssh node. For debugging purposes I added two debug nodes
(one for the payload and one for the session of the ssh connection).&lt;/p&gt;

&lt;h3 id=&quot;tv-headend--kodi&quot;&gt;TV-Headend / Kodi&lt;/h3&gt;

&lt;h4 id=&quot;map-recordings-to-nas&quot;&gt;Map recordings to NAS&lt;/h4&gt;

&lt;p&gt;My HTPC has limited resources, so I wondered if I could somehow mount an NFS share from my NAS.
I am using LibreELEC, and their documentation provides good instructions, see
&lt;a href=&quot;https://wiki.libreelec.tv/how-to/mount_network_share&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The only thing I changed was &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Before=kodi.service&lt;/code&gt; to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Before=service.tvheadend42.service&lt;/code&gt;,
because in my case that turned out to be more stable; otherwise the recordings were only partially
loaded. Here is my entire mount unit:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;[Unit]
Description=nfs share for tv recordings
Requires=network-online.service
After=network-online.service
Before=service.tvheadend42.service

[Mount]
What=10.32.1.5:/mnt/tralala/kodi_recordings
Where=/storage/recordings
Options=
Type=nfs

[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;h4 id=&quot;schedule-the-recording-in-tv-headend&quot;&gt;Schedule the recording in TV-Headend&lt;/h4&gt;

&lt;p&gt;I simply created a timer in TV-Headend, which can be done from the Kodi interface or over
the web frontend of TV-Headend. It might be easier to use the web frontend.&lt;/p&gt;

&lt;p&gt;With that, all my requirements are covered, and I have enjoyed the setup ever since :D&lt;/p&gt;
&lt;h2 id=&quot;ansible-in-general&quot;&gt;Ansible in general&lt;/h2&gt;

&lt;p&gt;I introduced Ansible at work as well as at home and have come to appreciate it a lot.
If hosts are set up with Ansible, it is easy to replicate them and keep track of all instances. At the same
time, I consider a setup self-documenting if it is done purely with Ansible.&lt;/p&gt;

&lt;p&gt;To get started I can recommend this &lt;a href=&quot;https://www.ansiblefordevops.com/&quot; target=&quot;_blank&quot;&gt;book&lt;/a&gt;. 
It is a perfect starting point and if you want to go deeper just use the 
official ansible &lt;a href=&quot;https://docs.ansible.com/&quot; target=&quot;_blank&quot;&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&quot;run-ansible-tasks-in-parallel&quot;&gt;Run ansible tasks in parallel&lt;/h2&gt;

&lt;p&gt;I am scheduling some LXC containers with AWX and Ansible on my Proxmox node. Now I have added
a new K8s development cluster, which consumes a lot of power while running, so I also want 
to make sure that it is shut down most of the time.&lt;/p&gt;

&lt;p&gt;With the containers this was easy with simple tasks, because they start and stop quickly.&lt;/p&gt;

&lt;p&gt;Here is my playbook for containers, which is fast enough:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;- name: Start/stop containers
  hosts: proxmox
  become: yes

  vars:
    ids:
      - &apos;202&apos;
      - &apos;203&apos;
      - &apos;204&apos;
      - &apos;206&apos;
      - &apos;207&apos;
    state: started

  tasks:
    - name: &quot; LXC&apos;s&quot;
      proxmox:
        api_host: 10.0.0.3
        api_user: root@pam
        api_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          62623164336537346264373264386431356534636162343439393734303233386437656365623161
          3035376339316363353764626566343832653834656638650a353166326630366563363362396633
          NOT A REAL HASH, EVEN WITH SECRET OF NO USE
          3735663235646638360a653961366538383036663035666134303231346562323732306334373965
          30323435616630306639396335386133326431663066333539616636393466653764
        node: pve
        vmid: &quot;{{ item }}&quot;
        state: &quot;{{ state }}&quot;
      loop: &quot;{{ ids }}&quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The K8s cluster is composed of virtual machines, which also mounts nfs-shares, and they take some 
time shutting down. That is why I want to run the commands in parallel.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;- name: Start/stop K8s related vms
  hosts: proxmox
  become: yes

  vars:
    ids:
      - &apos;301&apos;
      - &apos;310&apos;
      - &apos;311&apos;
      - &apos;312&apos;
    state: started

  tasks:
    - name: &quot; vm&apos;s&quot;
      proxmox_kvm:
        api_host: 10.0.0.3
        api_user: root@pam
        api_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          62623164336537346264373264386431356534636162343439393734303233386437656365623161
          3035376339316363353764626566343832653834656638650a353166326630366563363362396633
          NOT A REAL HASH, EVEN WITH SECRET OF NO USE
          3735663235646638360a653961366538383036663035666134303231346562323732306334373965
          30323435616630306639396335386133326431663066333539616636393466653764
        node: pve
        vmid: &quot;{{ item }}&quot;
        state: &quot;{{ state }}&quot;
        timeout: 30
        force: true
      async: 175 # proxmox forcefully terminates a vm after 120 seconds
      poll: 0 # move on to the next item immediately without checking back (concurrency)
      loop: &quot;{{ ids }}&quot;
      register: result

    - name: Check async tasks status
      async_status:
        jid: &quot;{{ async_result_item.ansible_job_id }}&quot;
      loop: &quot;{{ result.results }}&quot;
      loop_control:
        loop_var: &quot;async_result_item&quot;
      register: async_poll_results
      until: async_poll_results.finished
      retries: 200
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The option &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;poll: 0&lt;/code&gt; makes sure Ansible does not wait for a task to finish and immediately moves on to
the next one. The task with the async_status module then allows me to check back on all of them.&lt;/p&gt;

&lt;p&gt;This gives a nice mechanism to parallelize long-running tasks.&lt;/p&gt;
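&lt;p&gt;The pattern corresponds to background jobs in the shell: fire everything off, then wait and collect the results. A minimal sketch:&lt;/p&gt;

```shell
# Stand-in for the real shutdown call:
stop_vm() { sleep 0.1; echo "vm $1 stopped"; }

# "async: ... / poll: 0" is the equivalent of starting jobs with "&";
# the async_status task is the equivalent of "wait".
for id in 301 310 311 312; do
  stop_vm "$id" &
done
wait
echo "all vms stopped"
```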

&lt;h3 id=&quot;snapshots-on-proxmox&quot;&gt;Snapshots on proxmox&lt;/h3&gt;

&lt;p&gt;For some experiments with multiple VMs (4 for the k8s cluster), I had
to quickly take snapshots of all of them at the same time. This is how I do it with Ansible,
 respectively with AWX:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://docs.ansible.com/ansible/latest/collections/community/general/proxmox_snap_module.html&quot;&gt;https://docs.ansible.com/ansible/latest/collections/community/general/proxmox_snap_module.html&lt;/a&gt;&lt;/p&gt;

&lt;h2 id=&quot;awx&quot;&gt;AWX&lt;/h2&gt;

&lt;p&gt;I use an older version of AWX because I decided not to run it on a K8s cluster, since I only start the cluster
on certain occasions to save energy. There is an easy way to install it with docker-compose on a Docker
host, see &lt;a href=&quot;https://github.com/ansible/awx/blob/17.0.1/INSTALL.md#docker-compose&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you have a properly set up Ansible project, it is super simple to automate it
in AWX. As soon as multiple users deploy scripts on servers, it becomes a lot easier
to track what state the servers are in.&lt;/p&gt;

&lt;p&gt;More to come…&lt;/p&gt;</content><author><name>Toubi van Kenoubi</name></author><category term="[&quot;homelab&quot;]" /><summary type="html">Ansible in general</summary></entry><entry><title type="html">Helm</title><link href="https://tobisyurt.net/helm" rel="alternate" type="text/html" title="Helm" /><published>2023-07-30T00:00:00+02:00</published><updated>2023-07-30T00:00:00+02:00</updated><id>https://tobisyurt.net/helm</id><content type="html" xml:base="https://tobisyurt.net/helm">&lt;p&gt;&lt;a href=&quot;/assets/images/helm-logo.png&quot;&gt;&lt;img src=&quot;/assets/images/helm-logo.png&quot; alt=&quot;Helm&quot; width=&quot;178&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2 id=&quot;create-a-helm-chart&quot;&gt;Create a helm-chart&lt;/h2&gt;

&lt;p&gt;I will use my publicly available comments app to build a helm-chart. Get the default scaffolding
from helm with:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;helm create comments
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;What I changed to begin with:&lt;/p&gt;

&lt;p&gt;In Chart.yaml I changed a few values:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;appVersion: &quot;53&quot; (so that it matches the tag of the image I want to deploy)&lt;/li&gt;
  &lt;li&gt;name: comments&lt;/li&gt;
  &lt;li&gt;description: A Helm chart for the comments stack&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;in-valuesyaml&quot;&gt;In values.yaml:&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;replicaCount: 3 (because I have 3 worker nodes on my cluster)&lt;/li&gt;
  &lt;li&gt;image.repository: toubivankenoubi/comments (my image is available on docker hub)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Furthermore, some snippets from the values.yaml with a little explanation:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;service:
  type: ClusterIP
  port: 8080
  
ingress:
  enabled: true
  className: &quot;nginx&quot;
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: &quot;true&quot;
  hosts:
    - host: k8s-node-1.tobisyurt.local
      paths:
        - path: /comments
          pathType: Prefix

    - host: k8s-node-2.tobisyurt.local
      paths:
        - path: /comments
          pathType: Prefix
    - host: k8s-node-3.tobisyurt.local
      paths:
        - path: /comments
          pathType: Prefix
    - host: tobisyurt.net
      paths:
        - path: /comments
          pathType: Prefix
  tls: []
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I only changed the port of the ClusterIP service, because I was used to 8080.&lt;/p&gt;

&lt;p&gt;In the Ingress I added a lot of hosts. The important one is tobisyurt.net, which is the public-facing one.
Because I want to be able to test it locally too, I added my internal (private network) DNS entries for
the 3 k8s nodes.&lt;/p&gt;

&lt;h3 id=&quot;helm-dependencies&quot;&gt;helm dependencies&lt;/h3&gt;

&lt;p&gt;The comments app is dependent on a mongodb, so we add that to the dependencies in the
Chart.yaml:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;...

dependencies:
- name: mongodb
  version: &quot;12.15.4&quot;
  repository: &quot;oci://registry-1.docker.io/bitnamicharts&quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;If you deploy this chart as is, it deploys the mongodb subchart with all its defaults.
Normally you need to override some of these values so that your pods talk to each other properly.&lt;/p&gt;

&lt;p&gt;In the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;values.yaml&lt;/code&gt; of your main chart you can override values of your subcharts as follows:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;... other values of your chart ...

mongodb:
  persistence:
    size: 499Mi
  auth:
    rootPassword: &quot;a password...&quot;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;using-the-helm-chart&quot;&gt;Using the helm chart&lt;/h3&gt;
&lt;p&gt;Deployed with:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;helm install comments-test ./comments -n comments
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Uninstalled with:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;helm uninstall comments-test -n comments
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Test it without deploying it on the cluster (dry run):&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;helm install --debug --dry-run comments-test ./comments
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Upgrade to the latest chart release:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;helm upgrade comments-test . -n comments
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;</content><author><name>Toubi van Kenoubi</name></author><category term="[&quot;homelab&quot;]" /><summary type="html"></summary></entry><entry><title type="html">K8s</title><link href="https://tobisyurt.net/k8s" rel="alternate" type="text/html" title="K8s" /><published>2023-06-25T00:00:00+02:00</published><updated>2023-06-25T00:00:00+02:00</updated><id>https://tobisyurt.net/k8s</id><content type="html" xml:base="https://tobisyurt.net/k8s">&lt;p&gt;&lt;a href=&quot;/assets/images/k8s-logo.png&quot;&gt;&lt;img src=&quot;/assets/images/k8s-logo.png&quot; alt=&quot;k8s&quot; width=&quot;180&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I decided to install a Kubernetes (short K8s) cluster in my homelab, just to be able to mess around
with it as I please. By setting it up myself I also learn its architecture and benefits,
as well as its limitations and where cloud providers make your life easier for a certain cost…&lt;/p&gt;

&lt;p&gt;Because I only find time in the evenings it took me quite some time, but one can break it down
into several milestones, which are manageable in short time slots. This way it was always fun!&lt;/p&gt;

&lt;h2 id=&quot;some-useful-commands&quot;&gt;Some useful commands&lt;/h2&gt;

&lt;p&gt;To delete something previously deployed by &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl apply -f some-file.yml&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl delete -f test-replica-set.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;persistent-volumes&quot;&gt;Persistent Volumes&lt;/h2&gt;

&lt;p&gt;I will provide persistent volumes from my TrueNAS server via NFS.
I used the official k8s documentation and another blog as reference&lt;sup id=&quot;fnref:2&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot; role=&quot;doc-noteref&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Add nfs subdir external provisioner:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

kubectl create ns nfs-provisioner

NFS_SERVER=192.168.x.y
NFS_EXPORT_PATH=/mnt/sysdataset/k8s-nfs-1

helm -n  nfs-provisioner install nfs-provisioner-01 nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=$NFS_SERVER \
    --set nfs.path=$NFS_EXPORT_PATH \
    --set storageClass.defaultClass=true \
    --set replicaCount=1 \
    --set storageClass.name=nfs-01 \
    --set storageClass.provisionerName=nfs-provisioner-01
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Theoretically that should work by now. But as always it did not -.-&lt;/p&gt;

&lt;h3 id=&quot;troubleshooting&quot;&gt;Troubleshooting&lt;/h3&gt;

&lt;p&gt;Firstly, the pods could not mount the volumes, because nfs-common was missing on each worker node.
After installing it, the volumes mounted fine.&lt;/p&gt;

&lt;p&gt;The next problem was container-related. I tried to install mongodb, and my worker-node
VMs were missing a CPU instruction (the default CPU type lacks some newer ones). So I changed the
processor type of the worker nodes in Proxmox to “host”. After that, everything worked as expected.&lt;/p&gt;

&lt;h2 id=&quot;ingress&quot;&gt;Ingress&lt;/h2&gt;
&lt;p&gt;The ingress controller is mapped to the NodePort range because of the bare-metal setup.
One can change that if it is important and the nodes should expose ports 80 and 443 directly.
See the documentation: &lt;a href=&quot;https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#over-a-nodeport-service&quot; target=&quot;_blank&quot;&gt;https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#over-a-nodeport-service&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;80:32355/TCP http&lt;/li&gt;
  &lt;li&gt;443:32748/TCP https&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My router resolves the host names of my vms like hostname.tobisyurt.local. So in my case I can
access the ingress on node 1 like this: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://k8s-node-1.tobisyurt.home:32355/comments/&lt;/code&gt;.
For me that is good enough for testing purposes, especially because I will use an external
load balancer, so the port numbers do not matter anyway.&lt;/p&gt;

&lt;h2 id=&quot;external-load-balancer&quot;&gt;External Load Balancer&lt;/h2&gt;

&lt;p&gt;I simply used my nginx reverse proxy / WAF, which I already use for everything else, to route traffic
to the cluster.&lt;/p&gt;

&lt;p&gt;I recommend reading up on it in the nginx documentation, but this is how I set it up:&lt;/p&gt;

&lt;p&gt;I defined the cluster in the nginx.conf:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;upstream k8s {
        server k8s-node-1.tobisyurt.home:32355;
        server k8s-node-2.tobisyurt.home:32355;
        server k8s-node-3.tobisyurt.home:32355;
    }
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The default load balancing is Round Robin, which is what I want in this case, because it is a stateless
application. If the app is stateful and the replicas do not share all information (for example, the deployment
does not have a shared session storage), you can set &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ip_hash&lt;/code&gt;. Normally you would also want to configure a proper health check.&lt;/p&gt;
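&lt;p&gt;For the sticky variant, the upstream block would look like this (a sketch, not my actual config):&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;upstream k8s {
        ip_hash;
        server k8s-node-1.tobisyurt.home:32355;
        server k8s-node-2.tobisyurt.home:32355;
        server k8s-node-3.tobisyurt.home:32355;
    }
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;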

&lt;p&gt;In the vhost config it looks as follows:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;location /example {
#       proxy_set_header Host k8s-node-x.tobisyurt.home;
        include snippets/proxy-params.conf;
        proxy_pass http://k8s;
        access_log /var/log/nginx/k8s-test.access.log json_analytics;
        error_log /var/log/nginx/k8s-test.error.log info;
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It is important to make sure the Host header corresponds to the host in the ingress configuration.
Otherwise, the requests will not be forwarded to the right apps in your k8s cluster.&lt;/p&gt;

&lt;h2 id=&quot;pull-images-from-private-registries&quot;&gt;Pull images from private registries&lt;/h2&gt;

&lt;p&gt;You need to create a k8s secret for the private registry to enable your k8s cluster to pull
images from it&lt;sup id=&quot;fnref:1&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot; role=&quot;doc-noteref&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;. Here is an example of how to generate a secret named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;regcred&lt;/code&gt; for a private
registry:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl create secret docker-registry regcred --docker-server=&amp;lt;your-registry-server&amp;gt; --docker-username=&amp;lt;your-name&amp;gt; --docker-password=&amp;lt;your-pword&amp;gt; --docker-email=&amp;lt;your-email&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
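&lt;p&gt;The secret is then referenced from the pod spec (a sketch with placeholder names):&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;spec:
  containers:
  - name: app
    image: &amp;lt;your-registry-server&amp;gt;/some-image:tag
  imagePullSecrets:
  - name: regcred
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;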
&lt;hr data-content=&quot;footnotes&quot; /&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:2&quot;&gt;
      &lt;p&gt;Configure NFS as Kubernetes Persistent Volume Storage: &lt;a href=&quot;https://computingforgeeks.com/configure-nfs-as-kubernetes-persistent-volume-storage/&quot; target=&quot;_blank&quot;&gt;https://computingforgeeks.com/configure-nfs-as-kubernetes-persistent-volume-storage/&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot;&gt;
      &lt;p&gt;k8s documentation: &lt;a href=&quot;https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/&quot; target=&quot;_blank&quot;&gt;https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;</content><author><name>Toubi van Kenoubi</name></author><category term="[&quot;homelab&quot;]" /><summary type="html"></summary></entry><entry><title type="html">Backup Strategy follow-up</title><link href="https://tobisyurt.net/backup-strategy-follow-up" rel="alternate" type="text/html" title="Backup Strategy follow-up" /><published>2023-02-25T00:00:00+01:00</published><updated>2023-02-25T00:00:00+01:00</updated><id>https://tobisyurt.net/backup-strategy-follow-up</id><content type="html" xml:base="https://tobisyurt.net/backup-strategy-follow-up">&lt;p&gt;With my Backup-Nas and some other small changes, the backup strategy changed a little.&lt;/p&gt;

&lt;p&gt;These are my previous posts for reference:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;/backup-strategy&quot;&gt;Backup Strategy&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;/failover-scenario&quot;&gt;Failover Scenario&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With my previous strategy I had one incident that forced me to download around 4 TB of movies again, which
took quite some time, and some movies my system could hardly find anymore. That is why I decided that, with
the newly built Backup-Nas, I would also back up all my media.&lt;/p&gt;

&lt;p&gt;Therefore, I decided to improve my storage in two steps:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Put the media on a ZFS RAIDZ1&lt;sup id=&quot;fnref:1&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot; role=&quot;doc-noteref&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; Pool, so that I can also benefit from the self-healing abilities of ZFS&lt;/li&gt;
  &lt;li&gt;Backup everything on the Backup-Nas, also on RAIDZ1.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;improvement-1---raidz1&quot;&gt;Improvement 1 - RAIDZ1&lt;/h2&gt;

&lt;p&gt;For that to be power efficient I invested in three 18 TB drives, which leaves me with about 33 TB of usable
space in a RAIDZ1 array. They consume about 5 W each in standby, which was acceptable. All the other drives
I would use for the Backup-Nas media pool.&lt;/p&gt;
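&lt;p&gt;Creating such a RAIDZ1 pool is a one-liner. As a sketch, with a hypothetical pool name and device paths:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# one raidz1 vdev across the three 18 TB drives
zpool create media raidz1 \
  /dev/disk/by-id/ata-disk1 \
  /dev/disk/by-id/ata-disk2 \
  /dev/disk/by-id/ata-disk3
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;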

&lt;h2 id=&quot;improvement-2---backup&quot;&gt;Improvement 2 - Backup&lt;/h2&gt;

&lt;p&gt;Unfortunately, matching the large capacity of the Main-Nas media pool required lots of small drives.
So I asked around and eventually filled up my Backup-Nas with a bunch of old 4 TB and 3 TB drives.
In the end I had the following pools ready:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Pool 1 = (4 * 4 TB in RAIDZ1) + (5 * 3 TB in RAIDZ1)&lt;/li&gt;
  &lt;li&gt;Pool 2 = 2 * 10 TB in a mirror&lt;/li&gt;
&lt;/ul&gt;
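&lt;p&gt;A pool with two RAIDZ1 vdevs like Pool 1 can be built in a single command; as a sketch with hypothetical pool and device names:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Pool 1: two raidz1 vdevs striped together in one pool
zpool create backup \
  raidz1 sda sdb sdc sdd \
  raidz1 sde sdf sdg sdh sdi
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;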

&lt;p&gt;In total about 32 TB.&lt;/p&gt;

&lt;p&gt;Like that I was able to hit approximately the same capacity as my Main-Nas for the media. It does consume
some power, but it only runs for about two hours a week to synchronize.&lt;/p&gt;
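&lt;p&gt;The weekly synchronization can be done with incremental ZFS replication, roughly like this (dataset, snapshot and host names are placeholders):&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# initial full replication of the media dataset
zfs send media/movies@base | ssh backup-nas zfs recv backup/movies
# later runs only send the delta between two snapshots
zfs send -i media/movies@base media/movies@week01 | ssh backup-nas zfs recv backup/movies
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;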

&lt;p&gt;The rest of the backup strategy I didn’t change.&lt;/p&gt;

&lt;hr data-content=&quot;footnotes&quot; /&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot;&gt;
      &lt;p&gt;RAID-Level Summary: &lt;a href=&quot;https://en.wikipedia.org/wiki/ZFS&quot; target=&quot;_blank&quot;&gt;https://en.wikipedia.org/wiki/ZFS&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;</content><author><name>Toubi van Kenoubi</name></author><category term="[&quot;homelab&quot;]" /><summary type="html">With my Backup-Nas and some other small changes, the backup strategy changed a little.</summary></entry></feed>