changeset 3:45362e07941c

Add more old posts.
author Samuel Hodgkins <samuel.hodgkins@sky.com>
date Sun, 27 Aug 2017 03:02:44 +0100
parents 6891988cc526
children d3fa58cdce0d
files posts/How_I_almost_sucessfully_installed_Gentoo_Linux.md posts/Managing_dotfiles_with_vcsh_and_mr.md posts/Moving_a_Raspberry_Pi_3_from_Berryboot_to_just_plain_Raspbian.md
diffstat 3 files changed, 146 insertions(+), 0 deletions(-)
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/posts/How_I_almost_sucessfully_installed_Gentoo_Linux.md	Sun Aug 27 03:02:44 2017 +0100
@@ -0,0 +1,45 @@
+I'm not a distro-hopper by any means, but even so I have tested a number of Linux distributions.
+Primarily, these have been in the Debian family: X/K/Ubuntu, Debian itself, Raspbian and likely more that I'm forgetting.
+On a recommendation, I used Arch Linux very happily for a good while (experiments with GNU Guix on it notwithstanding), until my hard drive began dying one day.
+I had heard that openSUSE Tumbleweed was also rolling-release, and provided interesting rollback functionality out of the box using BTRFS and Snapper, so I installed it on a spare USB stick.
+Recently, my thoughts turned to Gentoo Linux. I mainly had in mind the perhaps not-entirely-accurate idea that it would take substantial time to install, due to the requisite amount of compiling.
+I also thought that the difficulty level of the installation was roughly equivalent to that of Arch. I wanted to see if my perceptions were right, so I planned to install and migrate to Gentoo.
+This led to a sequence of events that can be divided into roughly three parts.
+
+# Part I: The actual installation of Gentoo
+Much like Arch Linux, Gentoo has a [comprehensive wiki](https://wiki.gentoo.org/) filled with documentation covering not only the installation procedure but also a large number of other things that are needed post-install.
+This is a very good thing, because documentation is very useful when installing either distribution (especially if you haven't done it before).
+As such, I mostly ended up following the [Gentoo Handbook](https://wiki.gentoo.org/wiki/Handbook:Main_Page), a well-written resource much like Arch's own installation guide (except it seemed more organized, and structured into steps).
+Seeing as I was going to install Gentoo onto an existing filesystem (as a BTRFS subvolume), and was installing from an existing Linux rather than a CD, I could ignore three segments of the first part.
+The remaining installation steps looked like this:
+
+1. Download (and extract) a precompiled base system (a stage3 tarball)
+This stage was very easy: only a couple of commands to execute, with no decisions to make.
+2. Set appropriate compilation settings
+At this point I needed to select what compilation flags I would be using, as well as decide how many parallel jobs `make` should run.
+I decided to go with the default set of flags, only tweaking them to target GCC at my specific CPU type (`-march=amdfam10`), and followed the recommendation for job count so that `make` could run up to 5 tasks in parallel (see the make.conf sketch after this list).
+This was a very good decision: it made compiling feel very fast, and ensured that all of my CPU's capacity could be used by the process when needed.
+3. Enter the installed base system and configure/update Portage (Gentoo's package manager)
+This step was also rather easy: a bit of copying files around and a few commands. I selected the generic 'desktop' profile, as I didn't see a more accurate one.
+4. Rebuild the world
+Now that I had selected my profile, I needed to update my system to include the changed settings/flags that came with the new profile.
+Additionally, I needed to install the additional software selected by my profile (the commands are sketched after this list).
+In short, what I (or Gentoo's Portage) actually did can be succinctly explained with this image:
+
+![COMPILE ALL THE THINGS](https://cdn.meme.am/instances/500x/71652744.jpg)
+
+I expected that this would be the longest part of the installation, and that expectation was correct. Compiling 164 packages does take some time.
+However, it didn't take as much time as I had imagined; things actually felt pretty fast. Building a generic Linux kernel from scratch and installing it only took ~1h.
+I attribute this unexpected speediness to the benefits of passing `-j5` to `make`: allowing four files to be compiled at once, each using an entire CPU core, sped things up very nicely, while a fifth job meant there was almost always work queued whenever a core would otherwise have been idle.
+5. Configuration of USE flags/locale/timezone
+I decided not to touch the USE flags immediately, as they could easily be modified later as and when I needed to.
+I set the locale & timezone in accordance with my physical location (the UK).
+6. Compiling and installation of the kernel
+I decided that rather than start with a custom kernel configuration that may or may not boot, I would start with Genkernel, which would give me a base from which to customise my own kernel later (sketched after this list).
+Considering that the result was a rather generic kernel, it was a bit surprising that compiling and installing it from scratch only took an hour or so.
+7. General system configuration
+In this stage, I wrote /etc/fstab and configured the network (simply running DHCP automatically on the only Ethernet interface).
+I also assigned the system a hostname, and made sure that OpenRC used the correct keymap and started the network at boot-time.
+Before moving on to bootloader configuration, I selected the initial optional services I wanted installed and running at boot: a system logger, a cron daemon, and `mlocate` (see the service sketch after this list).
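+
+For reference, the compilation settings from step 2 boiled down to a few lines in `/etc/portage/make.conf`. This is a sketch rather than my exact file:
+
+```sh
+# /etc/portage/make.conf (sketch)
+# Target GCC at my specific CPU family instead of generic x86-64.
+CFLAGS="-O2 -pipe -march=amdfam10"
+CXXFLAGS="${CFLAGS}"
+# Allow make to run up to 5 jobs in parallel (CPU cores + 1).
+MAKEOPTS="-j5"
+```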
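+
+Steps 3 and 4 (profile selection and the world rebuild) amount to something like the following; the profile number is whatever `eselect` lists for the desktop profile on your system:
+
+```sh
+# List the available profiles and pick the generic desktop one.
+eselect profile list
+eselect profile set <number-of-the-desktop-profile>
+# Rebuild @world so everything picks up the new profile's settings/flags.
+emerge --ask --update --deep --newuse @world
+```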
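+
+Finally, the kernel build from step 6 and the boot-time services from step 7 look roughly like this (the logger and cron packages are examples; substitute your own choices):
+
+```sh
+# Build and install a generic kernel plus initramfs.
+emerge --ask sys-kernel/genkernel
+genkernel all
+# Have OpenRC bring up the network and start the optional services at boot.
+# (net.eth0 is created by symlinking it to net.lo.)
+cd /etc/init.d && ln -s net.lo net.eth0
+rc-update add net.eth0 default
+rc-update add sysklogd default
+rc-update add cronie default
+```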
+
+The next stage was bootloader configuration, but I think discussion of that would fit better in Part II. This post is getting somewhat long, so that'll be in another post in a short while.
\ No newline at end of file
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/posts/Managing_dotfiles_with_vcsh_and_mr.md	Sun Aug 27 03:02:44 2017 +0100
@@ -0,0 +1,76 @@
+Over time, a Linux user may customize and configure their environment rather substantially.
+These modifications are stored in a collection of configuration files/data known as 'dotfiles' (because many of their names begin with a '.').
+For multiple reasons, it is very beneficial to track, control and synchronise all of your personal dotfiles. A few example reasons:
+- Having an additional backup
+- Being able to see their history, and how they changed over time
+- Being able to roll back changes if needed (I haven't needed this yet)
+- Being able to use the same set of files across multiple physical/virtual machines
+- Being able to share your configuration with the world, so people can learn from it just as you learn from other people's
+
+However, there is no single universal method for managing them; instead, there are many tools and approaches one can take.
+GitHub provides a decent list of programs [here](https://dotfiles.github.io/) but I intend to summarize the main approaches below.
+(It may be worth noting that while the methods may not be mutually exclusive, there is one 'main' approach/method per tool and that is what counts.)
+
+1. Symlink-driven management involves moving the dotfiles away from their original location, instead creating symbolic links to one or more destinations.
+   There are many ways/approaches of doing this, but the simplest is to just have a single directory be the destination for all the links.
+2. VC (version control)-driven management involves less handling of the actual dotfiles than the other two. Instead of copying files or creating symbolic links,
+   a version-control system is used directly to track/manage dotfiles in groups. The original dotfiles are left in place, and can be treated
+   just like every other repository. There are multiple methods of implementing this approach, each with its own advantages and drawbacks.
+3. Configuration-driven management involves using explicit configuration file(s) to determine exactly what dotfiles are to be managed/tracked as well as how they are to be tracked among other things.
+   The key difference between this method and the others is that rather than using interactive commands to manage and modify dotfiles, one or more configuration files are used. 
+   Typical formats for this information include YAML/JSON or a purpose-built configuration format. This approach typically, but not exclusively, uses symbolic links for the dotfiles.
+
+I have been tracking my dotfiles for a short-to-moderate period of time. I originally started when I read an article about using GNU Stow as the management tool.
+Stow has some features that make it just as useful for this as a dedicated tool: it supports 'packages', so you can choose to install only part of your dotfiles.
+It also doesn't make you specify each file to symlink; it just symlinks the entire package.
+However, it's definitely not perfect: symlinks could be overwritten, moving dotfiles and replicating directory structures was painful, and operations could only be run from the right directory.
+(I could also only easily have one VCS repository, which effectively meant private dotfiles couldn't be tracked.)
+
+One day, while inspecting my ~/dotfiles I noticed that the .git directory was missing. I could've seen this as a disaster, but I didn't.
+I had been thinking about migrating away from Stow for a while, but I never actually did anything - so I took this opportunity.
+After some reading/googling, I decided to use `mr` and `vcsh`.
+`vcsh` would provide each individual repository, public and private, while `mr` would be used for higher-level tasks.
+There are multiple guides to this pair of tools, such as:
+
+  * [This very short post on vcsh & mr](https://sumancluster.wordpress.com/2015/05/29/managing-dotfiles-using-vcsh-and-mr/)
+  * [This one, which links to a more in-depth tutorial but also takes a look at the internals](https://www.kunxi.org/blog/2014/02/manage-dotfiles-using-vcsh-and-mr/)
+  * [This very useful and detailed one](http://srijanshetty.in/technical/vcsh-mr-dotfiles-nirvana/)
+
+When I was migrating, I found the last link particularly useful due to its detailed explanations of multiple common tasks.
+However, should you not want to read any of the above links, I will attempt to give an overview of how it works in practice.
+
+# Creating a new repository
+
+1. Clone/Initialize the local vcsh repository
+2. Update the myrepos (mr) configuration to include that repository
+3. Add the stuff you want to the vcsh repository
+4. Write/generate a .gitignore and modify as needed
+5. Commit to the vcsh repository and push both sets of changes as needed.
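+
+In shell terms, that looks something like this (a sketch using a hypothetical `zsh` repository; the file names and remote URL are placeholders):
+
+```sh
+# 1. Initialize a new repository (or `vcsh clone <url> zsh` for an existing one).
+vcsh init zsh
+# 2. Add a stanza for it to the myrepos configuration (see the 'available.d'
+#    discussion below), then enable it.
+# 3. Track the files you want.
+vcsh zsh add ~/.zshrc ~/.zprofile
+# 4. Generate a .gitignore so the rest of $HOME doesn't drown `git status`.
+vcsh write-gitignore zsh
+# 5. Commit and push.
+vcsh zsh commit -m 'Add zsh dotfiles'
+vcsh zsh remote add origin git@example.com:dotfiles-zsh.git
+vcsh zsh push -u origin master
+```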
+
+# Updating an existing repository
+
+1. You can prefix git operations with `vcsh` and the repository name to perform them on that repository.
+2. Alternatively, use `vcsh enter` to drop into an environment where git can be used normally.
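+
+For example, with a repository named `zsh`:
+
+```sh
+# Prefix the git command with the repository name...
+vcsh zsh status
+vcsh zsh commit -am 'Tweak prompt'
+# ...or enter the repository and use git directly.
+vcsh enter zsh
+git log --oneline
+exit
+```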
+
+# Updating *all* the repositories 
+
+1. Use `mr up` and let myrepos do the job it was designed to do.
+
+# Bootstrapping the dotfiles
+(presuming git is installed. If not, install it.)
+
+1. Install myrepos and vcsh. If there's no distribution package, a manual install is easy (they're just standalone scripts).
+2. Obtain your myrepos configuration.
+3. Use `mr up` and let myrepos obtain all your repositories as needed.
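+
+Assuming the mr configuration itself lives in a vcsh repository (here hypothetically called `mr`, with a placeholder remote), the whole bootstrap is a sketch like:
+
+```sh
+# 1. Install the tools (package names vary by distribution).
+sudo apt install myrepos vcsh
+# 2. Obtain the myrepos configuration.
+vcsh clone git@example.com:dotfiles-mr.git mr
+# 3. Let myrepos fetch everything else.
+mr up
+```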
+
+If you think the above workflow looks interesting, I recommend you have a good read of the above links - especially the last one,
+as I found it very useful. However, I have not yet moved my entire collection of dotfiles over, and I still have some interesting problems/caveats to tackle.
+
+Firstly, while using a (private) Git repository to track my SSH/GPG data is useful, certain files have special filesystem permissions which Git does not preserve. While this can be solved with a chmod or two, it may grow
+more difficult if I need more such files in the future - though I might be able to automate it using mr's 'fixups' functionality.
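+
+If I do automate it, the relevant stanza might look something like this (an untested sketch with a hypothetical `ssh` repository; as I understand it, `fixups` runs whenever a repository is checked out or updated):
+
+```sh
+# ~/.config/mr/available.d/ssh.vcsh (hypothetical)
+[$HOME/.config/vcsh/repo.d/ssh.git]
+checkout = vcsh clone git@example.com:dotfiles-ssh.git ssh
+fixups = chmod 700 ~/.ssh && chmod 600 ~/.ssh/config
+```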
+
+Secondly, this is more of an observation than a problem: I'm currently using an Apache-style configuration involving both *'available.d'* and *'config.d'*. This works and is flexible, but it'd be simpler to only have a single directory and have equivalent information be part of the configuration itself.
+
+Thirdly, bootstrapping from a completely clean slate is rather complicated. Certain repositories may depend on others to work / be in the correct location. Then there's the problem of access to private repositories; perhaps HTTP(S) could be used to download SSH keys using pre-entered cached credentials? A similar but lesser problem exists with GPG. Public repositories have no issues with this - if need be, they can have their master remote changed afterwards.
+
+Anyway, that's all for now. If and when I solve the above issues, I'll make sure to explain and blog about each of my solutions. Until then, I don't expect this to come up again.
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/posts/Moving_a_Raspberry_Pi_3_from_Berryboot_to_just_plain_Raspbian.md	Sun Aug 27 03:02:44 2017 +0100
@@ -0,0 +1,25 @@
+For a while now, I've had a Raspberry Pi 3, replacing my original Pi. The Pi, inside my chosen case, looks something like the image below:
+
+<img src="https://cdn.shopify.com/s/files/1/0174/1800/products/Rainbow_1_of_3_47e94e82-ba3a-4804-a280-3140109cd304_1024x1024.jpg?v=1456669057" alt="Raspberry PI 3 with Pibow case" width="200" height="200"/>
+
+Originally, I thought that it'd be cool to be able to install/uninstall/update multiple distros on it. NOOBS can do this, but I believe you can't retain existing data whenever you add/remove OSes/distributions. Instead, I became aware of (and chose) Berryboot. It provided a decent boot menu to select what you wanted to boot, while enabling you to add/remove new items without affecting existing installed ones. It did this by not giving each item its own partition - instead, it stored the initial download as a filesystem image and used AUFS to persist any user-made changes to the downloaded system.
+
+As time passed, I never actually used this functionality - my Pi 3 always booted Raspbian; I never bothered to even *install* anything else, never mind use/boot it. I continued to use Berryboot because it caused no issues (that I noticed, anyway), even though I didn't really need it and would have done just fine with a simple, plain Raspbian install.
+
+One day, the time came to reboot my Pi. I had done this multiple times before without any issues. However, on this attempt, all I got after the reboot was the 4-pixel rainbow screen, stuck there. Some googling/research on this problem led me to [this](https://github.com/maxnet/berryboot/issues/293) GitHub issue, which says that after upgrading the installed OS, a reboot may cause the exact same symptoms that I saw.
+
+I had two options:
+
+* Replace the problem Berryboot files with copies from the installed OS.
+* Somehow get rid of Berryboot and boot Raspbian directly...while preserving the exact state and data of my install of Raspbian.
+
+I chose the second option, reasoning that it'd be simpler and possibly more performant too (no use of AUFS, just direct writes to the underlying media).
+
+Now that I had chosen to remove Berryboot, I had to face the problem of migrating all my data/configuration. Since all my modified data was just a directory on a partition, I couldn't simply use `dd` to take a copy and put it back after removing Berryboot. I also couldn't simply create a blank partition and copy the existing data into it - only the modifications were stored as normal files/directories, and it was practically certain that some files had never been modified and as such would be missing.
+
+I came up with a plan that would (hopefully) work, and should anyone else need to do this, the steps are below:
+
+1. Create a tarball of the filesystem (compression is optional, but you'll likely want it). Make sure it's not stored on the SD card itself, because the card will be erased in the next step.
+2. Download (and extract) the latest release of Raspbian. Use `dd` (or whatever tool is appropriate) to write the resulting disk image to the SD card. Be **very careful** when using `dd`, because it's very easy to overwrite the wrong device and lose data.
+3. Extract the tarball onto the root filesystem of your new Raspbian. All files that were on your original installation will be on this one too, while any missing/absent files will remain from the freshly-installed Raspbian.
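+
+Concretely, the commands look roughly like this (a sketch: the device name `/dev/mmcblk0`, the mount points, and the image filename are assumptions - check yours with `lsblk` first):
+
+```sh
+# 1. From the running system, tarball the root filesystem to somewhere off-card,
+#    staying on one filesystem so /proc, /sys and other mounts are skipped.
+sudo tar -czpf /mnt/usb/raspbian-backup.tar.gz --one-file-system -C / .
+# 2. Write a fresh Raspbian image over the SD card. Double-check the device!
+sudo dd if=raspbian.img of=/dev/mmcblk0 bs=4M status=progress conv=fsync
+# 3. Mount the new root partition and unpack the backup over it.
+sudo mount /dev/mmcblk0p2 /mnt/newroot
+sudo tar -xzpf /mnt/usb/raspbian-backup.tar.gz -C /mnt/newroot
+```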
+
+It took a while, but I successfully performed all three steps (I also took the opportunity to make good use of GParted and resize the root filesystem before the first boot). The resulting system booted successfully and launched all the services that were previously running. However, it was not an exact copy - the SSH keys changed, for example.