Welcome to my wiki, which is mainly a blog-thing.

SDF Initial Experiences

So, I now have a prevalidated (for now) account over at the Super Dimension Fortress (SDF). I'll most likely validate it later, but not right now. In this post, I'm going to go over my initial experiences and my thoughts about them.

Initial Signup Process

Anyone can start using SDF by making a free account, and there are multiple ways of doing so. The simplest is a web form on the SDF website. Other methods include connecting to new@sdf.org using either SSH or Telnet (either works).
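For reference, this is roughly what those methods look like from a terminal (assuming SDF's standard host name; the Telnet variant asks you to log in as 'new'):

     # start the interactive signup over SSH
     ssh new@sdf.org
     # or over Telnet - log in as 'new' when prompted
     telnet sdf.org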

I used the SSH method - though my initial choice of when to try proved unfortunate, as the one time the connection didn't time out, it disconnected me very early in the process. Anyway, when SSHing to the system you get a series of prompts that ask questions about your new account. There are the obvious ones about username/password, but it also asks for some personal details (name, email, and ZIP code) and isn't clear whether these are required (I don't think they are) or how the information entered will be used and displayed. As such, I only entered my name, though I'm happy to enter an email if needed (I do not have a ZIP code - not American, you see?)

If you exit this account creation process (two ways of doing so are answering N to the final 'are you sure' prompt, or trying to use an already-taken name), you get dumped into a basic command-line menu. The menu's prompt is 'FEP Command', and while this seems inconsequential, it is in fact a reference to a completely different type of computer from the UNIX and Windows-based systems that are currently dominant. In the 1980s and early 1990s, there were companies that made Lisp Machines, which were (and still are) very different from modern systems running a UNIX-like OS or Windows. These systems had a Lisp processor, but in many of them the Lisp CPU was unable to boot the system standalone, so there was an additional processor (or, in later models, something specially integrated with the main processor) to bootstrap the Lisp one along with the rest of the system. Symbolics systems, in particular, had a Front End Processor that performed this task (as well as other things, like handling full system crashes). SDF is not a Lisp Machine and has no actual FEP. However, this restricted initial command line serves similar purposes.

  • It acts as a minimal 'frontend' to the actual SDF boxen.
  • It's used to create new accounts to the system, 'bootstrapping' your access :)

It's a nice reference or play on words, isn't it?

On the System

As a prevalidated user, you don't get access to an actual shell; instead you get their custom restricted shell, called 'psh'. It has various limitations which quickly become apparent, especially if you're used to a very nicely customised zsh environment. A minor detail I noticed is that both 'emacs' and 'nano' end up presenting the same editor: UW Pico 4.10. Even though the shell is limited, you still have the Gopher client and limited usage of lynx. There's also the bulletin board and chat systems, both of which I believe are unique to SDF and sdf-eu. The FEP prompt had a 'software' command, and psh has one too - surprisingly, the results are very different from the FEP one (much newer, for example). At the time of writing I haven't heavily used Gopher, the BBS or the chat system, but hopefully that will change (and this post will be on my new SDF gopherhole too).

Using eselect-repository with Gentoo

Background

Some time after installing Gentoo, I moved to syncing the Portage tree via Git (rather than rsync), because having the repository's history is nice and, as a bonus, the repository can be synchronized from GitHub over HTTPS while rsync transfers everything in plain text. Commits in the repository are also GPG-signed, but the current version of Portage does not yet use this to validate the repository (edit: it should by default - I rechecked the dates and 2.1.24 should have it).

However, the official Gentoo package repository is not enough. Even though I use Bedrock Linux, it's nice to have packages designed for Gentoo, so I use a number of unofficial repositories (overlays) to add more packages. The conventional tool for managing these is Layman, which has a decent-enough interface and supports a wide variety of methods to sync an overlay - all of the commonly-known open-source version control systems and some others. These overlays are updated at the same time as the official repository via Portage (which calls down to Layman to do the actual sync).

When I switched to syncing the tree via Git, I used a repository that had pre-generated all the additional information not directly contained in the actual 'gentoo' repository, such as metadata and other files. This has the advantage of being simpler to use, but as a consequence the repository's history and disk usage are bloated, because it has to track the entire history of not only the ebuilds and related files but also the easily-reproducible metadata and files taken from other repositories. Recently I found a package that resolves this without sacrificing ease of use, so I decided to try it out (I had other reasons too, but more on that later).

At the same time, I thought: since most of the overlays I use are Git-based, wouldn't it make sense to sync them the same way rather than going through Layman? It'd certainly be simpler. A quick look showed there was a package providing a good interface for this, eselect-repository, which is also much simpler than Layman - under 500 total lines in a single file compared to Layman's roughly 10kloc (estimated by cloc after removing autoconf and doc). Any non-Git overlays would stay managed by Layman, though.

Doing the move

(Note: Some parts left out or tweaked for the purposes of this post.)

  1. I added the mv overlay to add the package with the scripts, and then installed both it and eselect-repository with emerge.
  2. Ensure /etc/portage/repos.conf/gentoo.conf looks like this (change the clone-depth if you don't want a full clone):

     [DEFAULT]
     main-repo = gentoo
    
     [gentoo]
     location = /usr/portage
     sync-type = git
     sync-uri = https://github.com/gentoo/gentoo.git
     auto-sync = yes
     clone-depth = 0
    
  3. Tweak /etc/eselect/repository.conf so it uses the original sources, rather than the ones with metadata in the history.

  4. Make a note of your layman-managed git overlays somewhere to re-add them shortly afterwards.
  5. Remove (or disable) all the git overlays using layman (-d or -D)
  6. Re-add the overlays: use eselect repository enable for overlays on the official list, and eselect repository add for those that aren't (see the example after this list).
  7. Run your first sync command. Ignore the messages about non-existent directories, they'll go away as each overlay is created when synced.
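As a rough sketch of what steps 6 and 7 look like in practice (the overlay names here are only examples - substitute your own):

     # an overlay that is on the official list
     eselect repository enable guru
     # an overlay that isn't listed: give it a name, sync type and URI
     eselect repository add myoverlay git https://example.com/myoverlay.git
     # first sync of everything
     emaint sync -a        # or: emerge --sync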

Unrelated Benefits

While this section isn't directly related to either eselect-repository or Layman, it is related to the changes and why I made them. Essentially, after finding out about a certain overlay with a large number of packages, I noticed that eix-sync was taking a good amount of time to update its database for that large overlay - and, looking at another one, I realised I had never generated the metadata cache for any of my overlays. While googling about this, I found a complete set of shell scripts to update the cache for all overlays and repositories, along with other things (and also found a post pointing out the wastefulness of storing all the metadata in Git for the Gentoo repository). The end result was this post and the generated metadata cache, which, while taking some up-front CPU time, makes the eix updates faster overall and hopefully will also provide a nice speed boost for Portage in general.
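I won't reproduce those scripts here, but the essential commands are presumably along these lines (the overlay name is an example; adjust the job count to taste):

     # regenerate the metadata cache for an overlay, then refresh the eix database
     egencache --update --repo myoverlay --jobs 5
     eix-update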

Next?

While there are not many next steps, the most obvious one is to add support in portage for the sync methods that Layman provides but Portage itself does not. Even though I'm not an expert in Python, this shouldn't be difficult to implement.

(Addendum: Someone has in fact already written a module for Mercurial syncing, which is one of the two non-Git overlay types I use. You can find it on Gentoo's Bugzilla and apply it as a user patch; it works perfectly fine in my experience.)
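If you want to do the same, Portage's user-patch mechanism is the easiest route. A sketch (the patch file name here is made up - use whatever you downloaded from the bug):

     # drop the patch where Portage's own build will pick it up, then rebuild Portage
     mkdir -p /etc/portage/patches/sys-apps/portage
     cp mercurial-sync-module.patch /etc/portage/patches/sys-apps/portage/
     emerge --oneshot sys-apps/portage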

Guix-based ikiwiki

This site very obviously uses ikiwiki, both as a static wiki compiler and for the few dynamic elements present. It's written in Perl and has a large number of Perl module dependencies. Nearly all of these don't care much about the versions of anything else, so they just work after you install them. Unfortunately, I discovered that ikiwiki's image functionality uses the PerlMagick module, which does indeed depend on ImageMagick - in fact, on ImageMagick 6.x. That wasn't in Alpine's package repositories for my version (they only had 7.x), and installing it by hand did not work, failing with an error. I even tried replacing the distro's ImageMagick with my own build that included Perl support, but to no avail.

Enter Nix (and Guix)

By this time I had already encountered Nix and Guix, both of which offer a substantially different approach to package management compared with conventional systems (regular distro package managers, but others as well). One of the features they offer is isolation: you can use a package (or a version of a package) without that affecting other packages or the system as a whole. There are other useful features that drop out of their model, but this is the key one in this context.

Nix

Nix is the older, more well-known and more expansive of the two, so I tried it first. It's not perfect - I'm not really much of a fan of the Nix derivation language, which has been made from whole cloth solely for the project's purposes. That was not a dealbreaker, given that as a user I would hopefully not need to touch it at all. It passed the first test of actually having an ikiwiki package, but it failed the second: it would not install, and on closer inspection it was marked as 'broken' due to test failures, with no immediate fix in sight. I decided to look at my other options, starting with Guix.

Guix

The main (and maybe only?) good alternative to Nix is GNU's Guix. The primary difference is that instead of Nix's external DSL, Guix uses an internal DSL in GNU's own Guile language. I personally prefer this, as it's nice to read and comes with the bonus of having a regular programming language at your fingertips when you need it. Additionally, Guix's NixOS equivalent results in the system, init system and package manager all sharing the same configuration language - Guile. The system is defined in an .scm file, packages are defined by Guix's .scm files, and services are defined by .scm files provided by Guix and read by Shepherd. Coincidentally, right at the moment I was looking into Guix, a patch containing an ikiwiki package had been posted on their mailing list. I decided to wait for it to be merged and then try it out. Unlike the Nix version, it actually installed! It even almost ran (although there were some missing dependencies - more on that shortly) and generated my wiki successfully. No ImageMagick problems, because it linked against its own copy, stored under a long name in /gnu/store.
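For completeness, installing it once the patch was merged is a one-liner (assuming a working Guix on the host):

     guix package -i ikiwiki
     # the binaries end up under ~/.guix-profile/bin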

It was not perfect, however: I discovered that the more dynamic portions of the site (such as ikiwiki.cgi) were complaining about not being able to find Perl modules that should have been installed as dependencies. I tried adding packages and tweaking the configuration to point it at different locations, but I could not make it find what it needed. I sought help from the #guix channel on Freenode; it took a while, but luckily, after I explained the problem to the person helping (who happened to be the original submitter of the package), they made a patch to fix it. A few tweaks later and all the dynamic parts were working again, though I had to add some packages to Guix via their nice command-line importer and do some editing in Emacs.
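The importer, for reference, works roughly like this (the module name is just an illustration, and the generated definition usually needs a little hand-editing before it's usable):

     # emit a Guix package definition for a CPAN module
     guix import cpan HTML::Scrubber > perl-html-scrubber.scm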

Shortly after everything was working, I decided to move my modified ikiwiki and related packages out of my modified Guix source tree and into their own proper namespace. After doing this, I set up a mirror of this namespace and repository on a public Git server instance. Whatever packages I'm using but haven't finished or tried to upstream yet can be found here, on Ahti's Gogs.

Onward?

This sequence of events has led me to reconsider using the blob of Perl with many dependencies that is ikiwiki. I've looked for alternatives, but didn't find any that combine the concepts of blog and wiki in such a nice way. The closest is DWiki, but it has substantially fewer features than ikiwiki, so I decided not to try it out (although it's still a nice piece of Python software). Due to this lack of replacements, I've been wanting to replace ikiwiki with something not written in Perl, and likely written by me. Coincidentally, I've wanted to learn more about the Common Lisp programming language (and the Lisp family in general), and this provides a very nice opportunity to learn the language and then use it to solve a real need of mine.

Propellor and a Raspberry Pi 3

I've given a number of configuration management tools a whirl - Puppet, Chef, Salt, Ansible. Chef and Puppet started out nicely, but seemed complex to use given my small needs. Salt and Ansible had potential and features unique to each, but I've never liked using YAML syntax, and this showed.

Recently I've looked into Propellor, a configuration management system written in Haskell with the interface being a DSL in the same language. I've never been a Haskeller, indeed I'm an unrepentant fan of Smalltalk's beautiful OO simplicity - so why?

  1. The provided DSL looked interesting and readable. More readable than YAML at any rate (and likely the DSLs of the other two)
  2. Despite being written in and using Haskell, it claimed that knowledge of it was not a prerequisite.
  3. The 'getting started' process wasn't complex. The most difficult part was waiting for GHC and dependencies to be compiled (Gentoo.)
  4. Unlike with Chef or Puppet, there was no fancy layered complexity of the tool itself. No fancy 'structures' like Librarian, R10K and Berkshelf for Chef. No large servers requiring gigabytes of RAM all the time.

I got started; there were (and remain) some niggles and quirks along the road - for example, my VPS needed special bootstrapping attention (interestingly, the problem was fixed upstream the same day I worked around it). Soon enough, two of the three boxen of interest had a no-op configuration applied, indicating that the program was working correctly for future usage. But what happened to that third one? It's a long story.
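For anyone curious what 'getting started' involves, it's roughly the following (the host name is an example, and the exact bootstrap steps may differ for your setup):

     cabal install propellor          # or your distribution's package
     propellor --init                 # creates ~/.propellor with a config skeleton
     # edit ~/.propellor/config.hs to describe your hosts, then deploy to one:
     propellor --spin myhost.example.com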

This third box was my Raspberry Pi 3 running Arch Linux ARM.

My attempts

First I tried just installing GHC. That didn't work: linking errors (later I found there were missing static libraries). Then I tried to get a known-good compiler via Stack... before realizing it didn't have ARM binaries. It also didn't work well with the previously-mentioned compiler. I then tried installing an additional package that contained the static libraries. That almost worked... but I later found it to be incomplete. Oh well.

Getting weirder and crazier...

My next plan was very weird: taking the Debian-packaged binaries and running them on my system. This worked when run from my home directory... but magically stopped working when installed system-wide (??). The approach that actually succeeded (the most) was simpler: grab a generic binary release tarball and extract it into /usr/local. This actually worked - no linking errors, woo!
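The /usr/local approach was essentially this (the version and platform in the file name are illustrative - pick whichever ARM bindist matches your system):

     wget https://downloads.haskell.org/~ghc/8.0.2/ghc-8.0.2-armv7-deb8-linux.tar.xz
     tar -xJf ghc-8.0.2-armv7-deb8-linux.tar.xz
     cd ghc-8.0.2
     ./configure --prefix=/usr/local
     make install      # bindists only copy files; nothing gets compiled here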

Close, but no cigar..yet

This meant the compiler was actually working, but when it came to installing the software, the newer compiler meant I had to use a newer snapshot of packages. This brought dependency problems, which could be solved by compiling and installing another program. I did what was needed, and the program was compiling. The speed was slow though, much slower than my 'hello world' tests.

And then, in the middle, it hung. It still answered ping requests, but I couldn't type or SSH in. I tried rebooting, with the intent of trying again, either with the same method or a different one. Unfortunately, the system didn't come up on wifi afterwards. Nor did it come up when connected via Ethernet.

What was wrong? Plugging in an HDMI cable told me: 'fsck: UNEXPECTED INCONSISTENCY. Run fsck manually.' I couldn't properly handle this because the receiver for my USB keyboard was god-knows-where. I'll need to handle this offline, which means digging out my laptop and using its SD card slot. I think I'll take the opportunity to downgrade the compiler while I'm at it, and maybe pre-compile some software if the emulation isn't too bad.

update

I never got around to finishing the previous post, but I've decided to start a combination blog/wiki thing using ikiwiki. This wiki/blog isn't the best, but the plugins are useful and everything works.

I've imported all old posts, and hope to periodically add new blog posts and likely other types too.

How I almost successfully installed Gentoo Linux

I'm not a distro-hopper by any means, but even so I have tested/tasted a number of Linux distributions. Primarily, these have been in the Debian family: X/K/Ubuntu, Debian itself, Raspbian and likely more that I'm forgetting. I was recommended Arch Linux and used it very happily for a good while (experiments with GNU Guix on it notwithstanding), until my hard drive began dying one day. I had heard that Tumbleweed was also rolling-release and provided interesting rollback functionality out of the box using BTRFS and Snapper, so I installed it on a spare USB stick. Recently, I was thinking about Gentoo Linux - mainly about the perhaps not-entirely-accurate idea that it would take substantial time to install due to the requisite amounts of compiling. I also thought that the difficulty level of the installation was roughly equivalent to that of Arch. I wanted to see if my thoughts/perceptions were right, so I planned to install and migrate to Gentoo. This led to a sequence of events that can be divided into approximately three parts.

Part I: The actual installation of Gentoo

Much like Arch Linux, Gentoo has a comprehensive wiki filled with documentation on not only the installation procedure but also a large number of other things that are needed post-install. This is a very good thing, because documentation is very useful when installing either distribution (especially if you haven't done it before). As such, I mostly ended up following the Gentoo Handbook, which is a well-written resource much like Arch's own installation guide (except it seemed more organized, being structured into steps). Seeing as I was going to install Gentoo onto an existing filesystem (as a BTRFS subvolume) and was installing from an existing Linux rather than a CD, I could ignore three segments of the first part. The remaining installation steps looked like this:

  1. Download (and extract) a precompiled base system (a stage3 tarball). This stage was very easy: only a couple of commands to execute, with no decisions to make.
  2. Set appropriate compilation settings. At this point I needed to select which compilation flags I would be using, as well as decide how many parallel jobs make should run. I decided to go with the default set of flags, only tweaking it to target GCC towards my specific CPU type (-march=amdfam10), and to follow the recommendation for job count so that make could run up to 5 tasks in parallel (see the make.conf sketch after this list). This was a very good decision - it made compiling feel very fast and ensured that all of my CPU's capacity could be used by the process if needed.
  3. Enter the installed base system and configure/update Portage (Gentoo's package manager). This step was also rather easy: a bit of copying files around and a few commands. I selected the generic 'desktop' profile, not seeing a more accurate one.
  4. Rebuild the world. Now that I had selected my profile, I needed to update my system to include the changed settings/flags that came with the new profile. Additionally, I needed to install the extra software selected by my profile. In short, what I (or Gentoo's Portage) actually did could be succinctly explained with this image:

COMPILE ALL THE THINGS

I expected that this would be the longest part of the installation, and that expectation was correct. Compiling 164 packages does take some time. However, it didn't take as much time as I imagined; things felt pretty fast actually. Building a generic Linux kernel from scratch and installing it only took about an hour. I attribute this unexpected speediness to the benefits of passing -j5 to make - allowing 4 files to be compiled at once, each using an entire CPU core, sped things up very nicely, while a 5th task meant there was almost always something to do when a core would otherwise be idle.

  5. Configuration of USE flags/locale/timezone. I decided not to touch the USE flags immediately, as they could easily be modified later as and when I needed to. I set the locale and timezone in accordance with my physical location (the UK).
  6. Compiling and installation of the kernel. Rather than start with a custom kernel configuration that may or may not boot, I decided to start with Genkernel, which would give me a base from which to customise my own kernel. Considering that the result was a rather generic kernel, it was a bit surprising that it only took an hour or so to compile and install from scratch.
  7. General system configuration. In this stage, I wrote /etc/fstab and configured the network (simply running DHCP automatically on the only Ethernet interface). I also assigned the system a hostname, and made sure that OpenRC used the correct keymap and started the network at boot time. Before moving on to bootloader configuration, I selected which optional services I wanted installed and running at boot: a system logger, a cron daemon and mlocate.
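For reference, the compilation-related settings from step 2 amount to a couple of lines in /etc/portage/make.conf, something like the following (flags taken from the values above; the rest of the file is omitted):

     CFLAGS="-march=amdfam10 -O2 -pipe"
     CXXFLAGS="${CFLAGS}"
     MAKEOPTS="-j5"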

The next stage was bootloader configuration, but I think discussion of that would fit better in Part II. This post is getting somewhat long, so that'll be in another post in a short while.

Managing dotfiles with vcsh and mr

Over time, a Linux user may customize and configure their environment rather substantially. These modifications are stored in a collection of configuration files/data known as 'dotfiles' (because the names of many of them start with '.'). For multiple reasons, it is very beneficial to track, control and synchronise all of your personal dotfiles. A few example reasons include:

  • Having an additional backup
  • Being able to see their history and how they changed over time
  • Being able to roll back changes if needed (I haven't needed this yet)
  • Being able to use the same set of files across multiple physical/virtual machines
  • Being able to share your configuration with the world, so people can learn from it just like you learn from other people's.

However, there is not one single universal method for managing them, instead there are many tools and approaches that one can take. GitHub provides a decent list of programs here but I intend to summarize the main approaches below. (It may be worth noting that while the methods may not be mutually exclusive, there is one 'main' approach/method per tool and that is what counts.)

  1. Symlink-driven management involves moving the dotfiles away from their original location, instead creating symbolic links to one or more destinations. There are many ways/approaches of doing this, but the simplest is to just have a single directory be the destination for all the links.
  2. VC (Version Control)-driven management involves less direct handling of the actual dotfiles compared to the other two approaches. Instead of copying files or creating symbolic links, a version-control system is used to track/manage the dotfiles in groups. The original dotfiles are left in place and can be treated just like any other repository. There are multiple ways of implementing this approach, each with its own advantages and drawbacks.
  3. Configuration-driven management involves using explicit configuration file(s) to determine exactly which dotfiles are to be managed/tracked, as well as how they are tracked, among other things. The key difference between this method and the others is that rather than using interactive commands to manage and modify dotfiles, one or more configuration files are used. Typical formats for this information include YAML/JSON or a purpose-built configuration format. This approach typically, but not exclusively, uses symbolic links for the dotfiles.

I have been tracking my dotfiles for a short-to-moderate period of time. I originally started when I read an article about using GNU Stow as the management tool. Stow has some features that make it just as useful for this as a dedicated tool: it supports 'packages', so you can choose to install only part of your dotfiles, and it doesn't make you specify exactly which files to symlink - it just symlinks the entire package. However, it's definitely not perfect: symlinks can be overwritten, moving dotfiles and replicating directory structures was painful, and you could only run operations from the right directory. (I could also only easily have one VCS repository, which effectively meant private dotfiles couldn't be tracked.)

One day, while inspecting my ~/dotfiles, I noticed that the .git directory was missing. I could've seen this as a disaster, but I didn't. I had been thinking about migrating away from Stow for a while but never actually did anything about it - so I took this opportunity. After some reading/googling, I decided to use mr and vcsh: vcsh would provide each individual repository, public and private, while mr would be used for higher-level tasks. There are multiple guides to this pair of tools, such as:

When I was migrating, I particularly found the latter link to be rather useful due to the detailed explanations of multiple common tasks. However, should you not want to read any of the above links I will attempt to give an overview of how it works in practice.

Creating a new repository

  1. Clone/Initialize the local vcsh repository
  2. Update the myrepos(mr) configuration to include that repository
  3. Add the wanted stuff to the vcsh repository
  4. Write/generate a .gitignore and modify as needed
  5. Commit to the vcsh repository and push both sets of changes as needed.
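In concrete terms, those steps look something like this (the repository name, tracked files and remote URL are all examples):

     vcsh init zsh
     vcsh zsh add ~/.zshrc ~/.zshenv
     vcsh write-gitignore zsh
     vcsh zsh commit -m "Track zsh configuration"
     vcsh zsh remote add origin git@example.com:dotfiles/zsh.git
     vcsh zsh push -u origin master

(The mr configuration update from step 2 is a small stanza in available.d, enabled by linking it into config.d - see the observation further down.)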

Updating an existing repository

  1. You can prefix git operations with vcsh and then the repo name to perform them on the repository.
  2. Alternatively, use 'vcsh enter' to go into an environment where git can be used normally.
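For example (repository name as before):

     vcsh zsh status          # prefix form: any git subcommand works
     vcsh enter zsh           # or enter the repository...
     git log --oneline -3     # ...and use git directly
     exit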

Updating all the repositories

  1. Use mr up and let myrepos do the job it was designed to do.

Bootstrapping the dotfiles

(presuming git is installed. If not, install it.)

  1. Install myrepos and vcsh. If there's no distribution package, a manual install is easy (they're just standalone scripts)
  2. Obtain your myrepos configuration.
  3. Use mr up and let myrepos obtain all your repositories as needed.
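Which, assuming the mr configuration lives in its own vcsh repository (the URL is an example), is just:

     vcsh clone git@example.com:dotfiles/mr.git mr
     mr up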

If you think the above workflow looks interesting, I recommend having a good read of the above links - especially the last one, which I found very useful. However, I have not yet moved my entire collection of dotfiles over, and I still have some interesting problems/caveats to tackle.

Firstly, while using a (private) Git repository to track my SSH/GPG data is useful, certain files have special filesystem permissions which Git does not preserve. While this can be solved with a chmod or two, it may grow to be more difficult if I need more of these files in the future - plus I might be able to automate it using mr's 'fixups' functionality.
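A sketch of how that automation might look in the mr stanza for the repository (the repository name and URL are examples; 'fixups' runs after checkouts and updates):

     [$HOME/.config/vcsh/repo.d/ssh.git]
     checkout = vcsh clone git@example.com:dotfiles/ssh.git ssh
     fixups = chmod 700 ~/.ssh && chmod 600 ~/.ssh/id_*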

Secondly, this is more of an observation than a problem: I'm currently using an Apache-style configuration involving both 'available.d' and 'config.d'. This works and is flexible, but it'd be simpler to only have a single directory and have equivalent information be part of the configuration itself.

Thirdly, bootstrapping from a completely clean slate is rather complicated. Certain repositories may depend on others to work or to be in the correct location. Then there's the problem of access to private repositories - perhaps HTTP(S) could be used to download SSH keys using pre-entered cached credentials? A similar but lesser problem exists with GPG. Public repositories have no issues with this; if need be, they can have their master remote changed afterwards.

Anyway, that's all for now. If and when I solve the above issues, I'll make sure to explain and blog about each of my solutions. Until then, I don't expect this to come up again.

Moving a Raspberry Pi 3 from Berryboot to just plain Raspbian

For a while now, I've had a Raspberry Pi 3, replacing my original Pi. The Pi, inside my chosen case, looks something like the image below.

Raspberry PI 3 with Pibow case

Originally, I thought that it'd be cool to be able to install/uninstall/update multiple distros on it. NOOBS can do this, but I believe you can't do it while retaining existing data whenever you add/remove OSes/distributions. Instead, I became aware of (and chose) Berryboot. It provided a decent boot menu for selecting what to boot, while letting you add/remove items without affecting existing installed ones. It did this by not giving each item its own partition - instead, it stored the initial download as a filesystem image and used AUFS to persist any user-made changes to the downloaded system.

As time passed, I never actually used this functionality - my Pi 3 always booted Raspbian; I never bothered to install anything else, never mind use or boot it. I continued to use Berryboot, even though I didn't really need it and would have done just fine with a plain Raspbian install, because it caused no issues (that I noticed, anyway).

One day, the time came to reboot my Pi. I had done this multiple times before without any issues. However, on this attempt all I got after the reboot was the 4-pixel rainbow screen, stuck there. Some googling/research on the problem led me to this GitHub issue, which says that after upgrading the installed OS, a reboot may cause exactly the symptoms I saw.

I had two options:

  • Replace the problem Berryboot files with copies from the installed OS.
  • Somehow get rid of Berryboot and boot Raspbian directly... while preserving the exact state and data of my Raspbian install.

I chose the second option, reasoning that it'd be simpler and possibly more performant too (no use of AUFS, just direct writes to the underlying media).

Now that I had chosen to remove Berryboot, I had to face the problem of migrating all my data/configuration. Since all my modified data was just a directory on a partition, I couldn't simply use dd to take a copy and put it back after removing Berryboot. I also couldn't simply create a blank partition and copy the existing data into it - only the modifications were stored as normal files/directories, and it was practically certain that some files had never been modified and so would be missing.

I came up with a plan that would (hopefully) work; should anyone else need to do this, the steps are below (with a rough command sketch after the list):

  1. Create a tarball of the filesystem (compression is optional, but you'll likely want it). Make sure it's not stored on the SD card itself, because the card will be erased in the next step.
  2. Download (and extract) the latest release of Raspbian. Use dd (or whatever tool is appropriate) to write the resulting disk image to the SD card. Be very careful when using dd because it's very easy to overwrite the wrong partition and lose data.
  3. Extract the tarball onto the root filesystem of your new Raspbian. All files that were on your original installation will then be on this one too, while any files absent from the tarball will remain as provided by the freshly-installed Raspbian.
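As a command sketch (device names, mount points and file names are examples - double-check the device before running dd):

     # 1. archive the Berryboot data filesystem somewhere off the SD card
     tar czf /backup/berryboot-data.tar.gz -C /mnt/berryboot-data .
     # 2. write the fresh Raspbian image to the card
     dd if=raspbian.img of=/dev/mmcblk0 bs=4M status=progress
     # 3. mount the new root filesystem and unpack the backup over it
     mount /dev/mmcblk0p2 /mnt/raspbian-root
     tar xzf /backup/berryboot-data.tar.gz -C /mnt/raspbian-root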

It took a while, but I successfully performed all three of these steps (I also took the opportunity to make good use of GParted and resize the root filesystem before the first boot). The resulting system successfully booted and launched all the services that were previously running. However, it was not an exact copy - the SSH keys changed, for example.

Bedrock Linux

I am very much a fan of Linux, using it as the primary OS on my computer. Obviously, I have used multiple distributions of it. Each distribution has its own independent software library that is integrated with the package manager and the system as a whole. (Note: I am very much aware that Linux From Scratch and similar exist. I'm talking about the general case where some form of package management exists.)

This has some advantages:

  • No random downloading of installers/executables from the Internet like on Windows
  • You can browse and search for available software
  • Everything in the repositories follows a single set of standards / policies that the user can apply to any installed program.

All in all, it's a very wonderful user experience. However, it isn't perfect. The repositories provided are always finite: they cannot and will not include every program that exists, nor every variation of the programs they do include. This can very easily become a problem, such as in the following situations:

  • You want a different version of the program than the one available in the repositories.
  • You want a program that simply isn't in the repositories.
  • You want a program that is in the repositories... but was built using options you want to change.

If you end up in one of these situations, there are many, many ways to deal with it, each with its own trade-offs/side-effects, but today I'm going to focus on one particular case: you are a user of Distro X who has somehow got into one of the three situations described above. While browsing the internet for solutions, you see that a package from Distro Y would get you out of it. How do you install that package from Distro Y onto your Distro X installation?

Normally, you simply can't. Distro Y's packages are built to work on Distro Y only; there's no support for Distro X, and you can't even install the package, since Distro X's package manager only supports Distro X's specific format. Even if you did get it to install, you'd have problems with dependencies and other cross-distro differences.

At this point you might be asking, 'What is Bedrock Linux and how does it come into this' to which I answer this: Bedrock Linux allows you to combine multiple installed distributions. You're not limited to just 'Arch Linux' or just 'Debian'. Instead, you can have both Arch and Debian installed and be using programs with each concurrently. Of course, those two are just examples - you can have any number of distros concurrently installed and functioning.

It should be obvious how this applies to the hypothetical situation above. For someone using Bedrock Linux, it is mostly a non-issue, as packages from Distro Y can easily be installed - even if most of the packages on your system come from Distro X. The full story of how this is achieved is somewhat complex and involves decent amounts of filesystem manipulation, but to simplify: each distribution/chunk of files is called a stratum in Bedrock Linux terms. Aside from a few special strata, each stratum is a self-contained installation of a distribution. Combining multiple strata into a single system results in something that not only has a much deeper pool of software to draw upon and use, but can also leverage the strengths provided by each individual stratum.

Under Bedrock Linux, you can install Distro Y packages on a mostly-Distro-X system because that Distro Y package is installed into a complete, functional installation of Distro Y (accessible via a filesystem directory specially maintained by a Bedrock Linux component). There are certainly many other potential applications and use cases for Bedrock Linux, but this is one of the more obvious and immediately useful ones.
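As a taste of what that looks like day-to-day: current Bedrock releases provide a 'strat' command to run a program from a particular stratum (the stratum names below are examples; older releases used a different command for this):

     strat debian apt install htop     # install from the Debian stratum
     strat arch pacman -S htop         # or from the Arch stratum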

Should you wish to find out more, there's plenty of documentation here.

first post

This is the first post to this example blog. To add new posts, just add files to the posts/ subdirectory, or use the web form.

This blog is powered by ikiwiki.