Friday, August 15, 2014

Debian/Ubuntu init breakpoints (How to run zerofree and power off correctly afterwards)

Most of the Debian documentation mentions break=init and then says “see the /init source for more options.”  Fortunately, I can boot with break=init and dig through /init myself.

Here’s the possible list for Ubuntu 14.04 Trusty Tahr:
  • top: after parsing commandline and setting up most variables, prior to the /scripts/init-top script
  • modules: before loading modules and possibly setting up netconsole
  • premount (default for break without a parameter): prior to the /scripts/init-premount script
  • mount: prior to /scripts/${BOOT} (which is set to "local" in my VM but comments indicate it can be otherwise under casper)
  • mountroot: prior to actually mounting the root filesystem
  • bottom: prior to running /scripts/init-bottom and mounting virtual filesystems within the root
  • init: prior to unsetting all initramfs-related variables and invoking the real init

break=init drops to a busybox shell with the root device’s filesystem mounted at $rootmnt, which is /root on my system.  The virtual filesystems like /proc are also mounted under there, in the device’s root directory, not the initramfs’ root.  This is actually the state I want to be in, and digging out that list of break options was entirely irrelevant.  My apologies for the unintended shaggy-dog story.

From here, to run zerofree /dev/sda1 (your device name may vary) and shut down the system correctly afterwards:
  1. Boot with break=init
  2. chroot $rootmnt /bin/sh
  3. zerofree /dev/sda1
  4. exit
  5. sync
  6. poweroff

I just need to chroot into the disk first so that zerofree can read the mount table from /proc/mounts (it refuses to run on a filesystem that is mounted read-write).  Then, to clean up, I exit back to the initramfs, where its poweroff command actually works.
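For reference, break=init just goes on the kernel command line.  From the GRUB menu, press e and append it to the linux line; a sketch, where the kernel version and root device are only illustrative and will differ on your system:

```
linux /boot/vmlinuz-3.13.0-32-generic root=/dev/sda1 ro break=init
```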

I used to use init=/bin/sh to get a shell inside the root mount, but then I didn't have a way to shut down the system cleanly.  In the image, shutdown wants to talk to init over D-Bus, neither of which is actually running, and it simply emits a cryptic message like “shutdown: Unable to shutdown system”.  Recovery and single-user modes didn't work either, because the filesystem was already mounted read-write and enough services were up to prevent remounting it read-only.


Apparently, in spite of OpenSSL being automatically updated to the newest version, my server was just sitting around vulnerable for a while: the long-running services kept the old library loaded until they were restarted. unattended-upgrades, you had one job: to get security updates applied without me pushing buttons manually.

I guess I've got to write a cron job to reload all my network-facing services daily, in case some dependencies are ever updated again. Because for all its strong points, the package system doesn't care if your packages are actually in use.
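In the meantime, a blunt version of that cron job might look like the sketch below; it assumes an /etc/cron.d entry, and nginx and ssh are purely example service names, so substitute whatever network-facing daemons you actually run:

```
# /etc/cron.d/reload-network-services
# Restart long-running daemons nightly so they pick up upgraded
# libraries (e.g. a new OpenSSL) instead of keeping the old one loaded.
0 4 * * * root service nginx restart && service ssh restart
```

Debian's debian-goodies package also ships checkrestart, which lists processes still running against deleted (upgraded) libraries, if you'd rather restart only what needs it.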

Thursday, August 14, 2014

On the Windows XP EOL

I discovered that some problems people had connecting to our shiny new SHA-256 certificate in the wake of Heartbleed were not caused by “Windows XP” per se, but by the lack of Service Pack 3 on those systems.

SP3 itself was released in 2008, meaning that SP2 had a two-year “wind down” until it stopped receiving support in 2010. That means everyone who had problems with our certificate was:

  1. Using an OS that has been obsoleted by three further OS versions if you include Windows 8.
  2. Using an OS that had reached its actual end-of-life after ample warning and extensions from Microsoft.
  3. Using a version of that OS which had been unpatched for nearly four years.

Combine the latter two, and you have people who never installed the SP3 update during its entire six-year support lifetime, a span longer than Windows 7 had even been available.

In light of this, I can see why businesses haven’t been too worried about the end-of-life for Windows XP. It’s clear that those affected are not running SP3 on those systems, meaning they were already four years into their own unpatched period.

And if they “just happen” to get viruses and need cleanup, that just seems to be part of “having computers in the business.” Even if the machines were up-to-date, there would still be a few 0-days and plenty of user-initiated malware afflicting them. There’s little observable benefit to upgrading in that case… so little, in fact, that the business has opted not to take any steps toward it in half a decade.

Tuesday, August 5, 2014

GCC options on EC2

AWS second-generation instances (such as my favorite, m3.medium) claim to launch with Xeon E5-2670 v2 processors, but sometimes come up with the v1 variant instead.  These chips are Ivy Bridge and Sandy Bridge, respectively.  AFAICT, this means that AMIs with code compiled for Sandy Bridge should run on any second-generation instance, but may not run on previous-generation instances.  What little I can find online about previous-generation instances shows a mix of 90nm Opteron parts (having only up to SSE3) and Intel parts from Penryn-based Harpertown up through Westmere.
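Since a second-generation instance might come up on either chip, it's worth checking at runtime what the CPU actually offers before trusting Sandy Bridge-level code to run.  A minimal sketch, reading the flags line of Linux's /proc/cpuinfo:

```shell
#!/bin/sh
# Check whether this machine's CPU advertises AVX, the headline
# feature of Sandy Bridge and later.  Code built with
# -march=corei7-avx will die with SIGILL on a CPU that lacks it.
if grep -q '\bavx\b' /proc/cpuinfo; then
    have_avx=yes
    echo "AVX present: Sandy Bridge-targeted code should run here"
else
    have_avx=no
    echo "no AVX: build with a more conservative -march (corei7, core2, ...)"
fi
```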

GCC changed its -march options in the recently released version 4.9.  GCC 4.8, which notably shipped in Ubuntu 14.04 (Trusty Tahr), used some inscrutable aliases; the equivalent of 4.9's sandybridge is corei7-avx, for instance.  Here's a table of the subset that's most interesting to me:

GCC 4.8       GCC 4.9
corei7        nehalem
corei7-avx    sandybridge
core-avx-i    ivybridge
core-avx2     haswell

Broadwell and Westmere are not explicitly supported in the older release.  Based on its definition in the gcc-4.9 sources, I believe the equivalent set of flags for Westmere in gcc 4.8 would be -march=corei7 -maes -mpclmul.  And naturally, the core-avx2 option for Haswell would be the best available for targeting Broadwell.
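One way to sanity-check these alias mappings is to ask gcc which ISA feature macros a given -march enables; equivalent spellings should turn on the same set.  A sketch, assuming an x86 host with a gcc new enough to accept the 4.9-style names:

```shell
#!/bin/sh
# Dump the interesting ISA macros enabled by -march=westmere.  If
# "-march=corei7 -maes -mpclmul" is truly the gcc 4.8 equivalent,
# swapping it in should print the same set.
gcc -march=westmere -dM -E - </dev/null \
    | grep -E '__(AES|PCLMUL|SSE4_2|AVX)__' | sort
```

Westmere should show __AES__, __PCLMUL__, and __SSE4_2__ but not __AVX__, since AVX only arrived with Sandy Bridge.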