Resolving Windows 2008 SSL problems

While performing some maintenance and updates on an ancient Windows 2008 Server VM, I upgraded it to Service Pack 2, in an attempt to resolve various ongoing issues.

Shortly afterwards, I discovered Windows Update was no longer working; it gave an obscure failure code and all the usual efforts to revive it, such as flushing the software distribution cache, were unsuccessful. No matter — Windows 2008 Server is out of support anyway, so there are unlikely to be any important updates, right?

Some time later, I discovered that the recently upgraded VPOP3 mail server, running on the same machine, was complaining that it couldn’t access its activation server to verify its license. This wasn’t a critical failure, but it resulted in reduced functionality.

Checking the mail server logs revealed this error message:

"Server certificate verification failed. Connection aborted" Windows code 0x80090302

Hmm. I used Wireshark to monitor the network traffic when VPOP3 started up, and could see it issuing an HTTPS request to the remote activation server. However, the session ended immediately after the initial handshake — clearly something was amiss. I verified that Chrome running on the same machine could access the Activation website successfully; the issue appeared to be with the Windows 2008 SSL libraries.

A check of the Windows Application event log revealed lots of repeated CAPI2 error events every 10 minutes:

Failed extract of third-party root list from auto update cab at: with error: A certificate chain processed, but terminated in a root certificate which is not trusted by the trust provider.

The next step was to visit the Applications and Services Logs in Event Viewer, then navigate to Microsoft / Windows / CAPI2, right-click the Operational log and enable CAPI2 logging. After doing so, a lot more detail about CAPI2 activity started to appear. (It’s a good idea to also increase the CAPI2 log size to 4096 KB in Properties, as the default 1024 KB can fill up very quickly.)
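For the keyboard-inclined, both changes can also be made from an elevated command prompt using the built-in wevtutil tool (the size below is the same 4096 KB, expressed in bytes):

```shell
rem Enable the CAPI2 Operational log and raise its maximum size to 4 MB
wevtutil sl Microsoft-Windows-CAPI2/Operational /e:true /ms:4194304
```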

Repeating the activation test with this additional debugging enabled revealed a new error:

Very odd — apparently the Microsoft Root Certificate Authority 2010 was now considered untrusted. Just in case it had become corrupted, I used certmgr.msc to export a copy of this from another, newer machine and import it to the Windows 2008 Server instance, but this made no difference.

After some more research, I discovered John Thaller’s useful Root Certificate Updates For Legacy Windows GitHub repository. Surely this would sort things out? Unfortunately, while it installed without issue, the problem remained.

Eventually, after a lot of further searching, I came across a Microsoft article from 2019 which described a plan to move Windows Update Services from SHA-1 signatures (now considered insecure) to SHA-2. A key part of this plan was that after August 2020, the Windows root certificates would no longer validate SHA-1 signatures, and all Windows Update Services would require SHA-2.

However, Windows 2008 Server SP2 doesn’t actually support SHA-2 verification. No problem – Microsoft were pushing an update that would add the needed SHA-2 support, so once that update was installed ahead of the transition deadline, everything would be good.

Christmas 2023 turned out to be NOT ahead of the transition deadline (missed it by more than three years, in fact) which explains why the server couldn’t access Windows Updates any more. As VPOP3’s Activation server appears to rely on a Microsoft certificate chain for authentication, it too fell victim to the lack of SHA-2 support.

Fortunately, there is a straightforward solution, as Microsoft describe in a follow-up article on How to Update Windows Devices to SHA-2. Microsoft made available a standalone SHA-2 update download page that has the correct installation file for a variety of different scenarios. Pick the one that matches your system, download and install, reboot, and you’re all set.
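For reference, on Windows Server 2008 SP2 and Windows 7 SP1 the SHA-2 code-signing support update is (as far as I can tell) KB4474419; you can check whether it is already installed from a command prompt:

```shell
rem Lists the hotfix entry only if the SHA-2 support update is present
wmic qfe get HotFixID | findstr KB4474419
```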

And indeed, after doing just that, everything started working normally once again!

All’s well that ends well, then. However, I do feel there was a missed opportunity here for the CAPI2 logging to be more explicit that the root cause of the failure was the lack of SHA-2 signing support in the OS, or at least the lack of a verifiable signature on the provided certificate – that would have saved a lot of time!

Command line control for Zigbee Smart Plugs

As an embedded systems developer working primarily from home, I often need to remotely power-cycle development boards located in the office.

In the past, I’ve used a Denkovi USB Relay 4-port controller with some success. Wiring individual power supplies through this is fiddly, though, and it’s not convenient to swap between different power supplies. It does, however, provide a simple Windows GUI to control the outputs.

I’ve also used a Lindy IPower Switch Classic 8, a network attached power strip which includes an Ethernet port and a built-in web UI to allow any socket to be turned off or on remotely. This works very well but at over £300, it’s a pricey option. Also, it uses IEC C13 sockets (as you might find on a UPS) which don’t work well for connecting standard wall-wart power supplies.

I’ve been looking for a simpler, cheaper solution. Zigbee Smart Plugs are widely available now, and very affordable, so adding a cheap USB Zigbee Controller to my office PC should let me turn them on and off with a simple command line instruction, right?

Of course, it turns out to be a bit more complicated than this. Most online guides recommend installing Home Assistant, an impressive package that can control your whole house. While it can definitely turn on and off a few Zigbee smart plugs, running it under Windows means setting up a Virtual Machine to host it. That seems like a lot of effort and CPU use for such a simple requirement.

Instead, I’ve figured out how to do it with some more basic tools. If you’re trying to do the same, follow along.


Individual Zigbee devices communicate with the Zigbee USB dongle by sending (“publishing”) occasional messages, and receiving (“subscribing” to) messages from other devices or applications. A message broker sits between the devices and applications and co-ordinates these messages. Messages are published to specific topics, which are structured like paths in a filesystem; choosing the correct topic name is how an application targets a message at a particular device.

A key point is that a device does not need to be online when an application sends it a message. Similarly, a device can send a message without the recipient being connected. The broker keeps track of all of this and makes sure any pending messages are delivered the next time the recipient wakes up. This allows ZigBee devices to consume very little power as they don’t need to be constantly awake.

The protocol used for all of this is called MQTT. You’ll see config files and diagnostic messages referencing URLs such as mqtt://localhost:1883/, which identify a message broker listening on a particular server and port. Mosquitto is the message broker we’ll use here; it runs as a Windows service and implements the MQTT protocol.
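(These URLs follow the standard scheme://host:port form, so if you ever need to script against the broker, the address can be split out with ordinary URL tools. A quick Python sketch, assuming the usual MQTT default port of 1883:)

```python
from urllib.parse import urlparse

def broker_address(url, default_port=1883):
    """Return the (host, port) pair an MQTT client would connect to."""
    parsed = urlparse(url)
    return parsed.hostname, parsed.port or default_port

# With the port omitted, the MQTT default of 1883 is assumed:
print(broker_address("mqtt://localhost:1883/"))  # ('localhost', 1883)
print(broker_address("mqtt://localhost"))        # ('localhost', 1883)
```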

As Mosquitto doesn’t know anything about ZigBee, we need another package to convert between the MQTT messages and raw ZigBee device operations. This package is Zigbee2MQTT.


Most Zigbee hardware is well supported by Zigbee2MQTT, so it may pay to shop around. LIDL periodically have Smart Plug offers which reputedly work well. (You may need to replace the proprietary Tuya manufacturer firmware with an Open Source equivalent). For a simple life, I used these plugs from Amazon UK which worked out of the box:


These packages are for Microsoft Windows. They are also available on Linux but you’ll need to adapt the instructions accordingly. I’ve listed the versions I used, but generally, go for the latest public release available in each case.

  • Node v16.15.0 for Windows x64 – used to provide the environment for ZigBee2MQTT
  • Mosquitto 2.0.14 for Windows x64 – the message broker used to co-ordinate messages sent between applications and Zigbee devices
  • Zigbee2MQTT-Master – the software bridge that sits between Zigbee devices on the USB dongle and the message broker.
  • MQTT Explorer – a useful diagnostic tool to explore connected devices and test sending commands to them.


Run the Node installer. You may wish to tick the box for “Automatically install the necessary tools. Note that this will also install Chocolatey” but it’s not necessary for this specific application.

Next, run the Mosquitto installer. After installation, you will have a new Windows system service called “Mosquitto Broker”. Start this via the Windows service manager, or from an Administrator command prompt using the command net start “mosquitto broker”. (It will automatically start after a system reboot.)

To install Zigbee2MQTT, visit the Zigbee2MQTT repository page and use the green Code button to download a .zip file of the repository. Extract this to a permanent folder on your local drive where it will live (e.g. C:\Dev\Zigbee). Then open a command prompt, change to this folder and run “npm ci” to set up all the node dependencies.

After installation, edit the config file data\configuration.yaml and ensure it looks similar to this:

# Home Assistant integration (MQTT discovery)
homeassistant: false

# allow new devices to join
permit_join: true

# MQTT settings
mqtt:
  # MQTT base topic for zigbee2mqtt MQTT messages
  base_topic: zigbee2mqtt
  # MQTT server URL
  server: 'mqtt://localhost'
  # MQTT server authentication, uncomment if required:
  # user: my_user
  # password: my_password

# Serial settings
serial:
  # Location of CC2531 USB sniffer
  port: \\.\COM8
  adapter: deconz

The significant fields here are permit_join, server, port and adapter. permit_join should normally be false but when setting up a new system, set it to true – this allows your smartplugs to pair with Zigbee2MQTT automatically.

Set the port field to the Windows COM device assigned to your USB Zigbee dongle. This may change if you move it to a different USB port, so it’s best to always leave it in the same port if possible. The easiest way to identify the COM device is to open Windows Device Manager, expand the Ports tree and look at the COM ports that are currently listed. If there are several, unplug the Zigbee dongle and check which one disappears; then reconnect the dongle and watch it reappear. On my system, it appears as COM8.

The adapter: deconz field is needed when you are using the ConBee II USB dongle; if you are using a different brand, this may not be required. It ensures Zigbee2MQTT uses the right protocol to control the dongle.

After editing the configuration, start Zigbee2MQTT with the command npm start. If this is the first time, it will compile the system; subsequent starts will be faster.

In any case, you should shortly see output similar to this:

Building Zigbee2MQTT... (initial build), finished
Zigbee2MQTT:info  2022-06-15 10:28:57: Logging to console and directory: 'C:\Dev\Zigbee\zigbee2mqtt-master\data\log\2022-06-15.10-28-57' filename: log.txt
Zigbee2MQTT:info  2022-06-15 10:28:57: Starting Zigbee2MQTT version 1.25.2 (commit #unknown)
Zigbee2MQTT:info  2022-06-15 10:28:57: Starting zigbee-herdsman (0.14.34)
Zigbee2MQTT:info  2022-06-15 10:28:57: zigbee-herdsman started (resumed)
Zigbee2MQTT:info  2022-06-15 10:28:57: Coordinator firmware version: '{"meta":{"maintrel":0,"majorrel":38,"minorrel":114,"product":0,"revision":"0x26720400","transportrev":0},"type":"ConBee2/RaspBee2"}'
Zigbee2MQTT:info  2022-06-15 10:29:10: Currently 0 devices are joined:
Zigbee2MQTT:warn  2022-06-15 10:29:10: `permit_join` set to  `true` in configuration.yaml.
Zigbee2MQTT:warn  2022-06-15 10:29:10: Allowing new devices to join.
Zigbee2MQTT:warn  2022-06-15 10:29:10: Set `permit_join` to `false` once you joined all devices.
Zigbee2MQTT:info  2022-06-15 10:29:10: Zigbee: allowing new devices to join.
Zigbee2MQTT:info  2022-06-15 10:29:11: Connecting to MQTT server at mqtt://localhost
Zigbee2MQTT:info  2022-06-15 10:29:11: Connected to MQTT server
Zigbee2MQTT:info  2022-06-15 10:29:11: MQTT publish: topic 'zigbee2mqtt/bridge/state', payload 'online'
Zigbee2MQTT:info  2022-06-15 10:29:12: MQTT publish: topic 'zigbee2mqtt/bridge/config', payload '{"commit":"unknown","coordinator":{"meta":{"maintrel":0,"majorrel":38,"minorrel":114,"product":0,"revision":"0x26720400","transportrev":0},

If you have some Smartplugs connected to nearby sockets, and they are in pairing mode (which is the default when they are brand new), they may automatically register. Then you’ll see additional entries like this:

Zigbee2MQTT:info  2022-06-15 10:29:13: Device '0x00124b0024c1007e' joined
Zigbee2MQTT:info  2022-06-15 10:29:13: MQTT publish: topic 'zigbee2mqtt/bridge/event', payload '{"data":{"friendly_name":"0x00124b0024c1007e","ieee_address":"0x00124b0024c1007e"},"type":"device_joined"}'
Zigbee2MQTT:info  2022-06-15 10:29:13: MQTT publish: topic 'zigbee2mqtt/bridge/log', payload '{"message":{"friendly_name":"0x00124b0024c1007e"},"type":"device_connected"}'
Zigbee2MQTT:info  2022-06-15 10:29:13: Starting interview of '0x00124b0024c1007e'
Zigbee2MQTT:info  2022-06-15 10:29:13: MQTT publish: topic 'zigbee2mqtt/bridge/event', payload '{"data":{"friendly_name":"0x00124b0024c1007e","ieee_address":"0x00124b0024c1007e","status":"started"},
Zigbee2MQTT:info  2022-06-15 10:29:13: MQTT publish: topic 'zigbee2mqtt/bridge/log', payload '{"message":"interview_started","meta":{"friendly_name":"0x00124b0024c1007e"},"type":"pairing"}'

This means that a new device has been registered, and some MQTT topics have been created for it.

Finally, install MQTT Explorer. When it runs initially, you’ll see an MQTT Connection dialog. Use the ‘+‘ icon in the upper left corner to add a new connection called ‘localhost‘ with the Host field also set to ‘localhost‘.

If you open this connection, it will connect to your local Mosquitto broker. On the left, you’ll see a tree view like this:

If you click on the devices line, the right-hand panel will show you the contents. This is a low-level view of the raw data, so it won’t mean too much to you at this point. It’s useful for checking if your devices have registered yet.


What next? If you stop Zigbee2MQTT and check its data\configuration.yaml file once again, you may see some new entries at the bottom like this:

devices:
  '0x00124b0024c1007e':
    friendly_name: '0x00124b0024c1007e'
  '0x00124b0024c10079':
    friendly_name: '0x00124b0024c10079'
  '0x00124b0024c1007a':
    friendly_name: '0x00124b0024c1007a'
  '0x00124b0024c10071':
    friendly_name: '0x00124b0024c10071'

These have been automatically added by Zigbee2MQTT during its discovery phase. You may like to set the permit_join field to false to stop additional devices registering automatically.

You can edit the friendly_name for each device to something more recognisable. For now, I suggest ‘plug1‘, ‘plug2‘, ‘plug3‘ and ‘plug4‘.

Restart Zigbee2MQTT (‘npm start‘) and now the friendly names can be used to manipulate the plugs.

(If you get an error when restarting Zigbee2MQTT, either your configuration file has an error or you have another copy already running in the background. The console output usually indicates a more precise cause of failure.)

Plug Control

To recap, the Mosquitto Broker service is running in the background, acting as a central message co-ordinator, and Zigbee2MQTT is providing a bridge between Mosquitto and the USB dongle.

All we need now is a way to send messages to the smart plugs to turn them on and off. This is done by publishing messages to special topic names corresponding to the friendly names we assigned earlier.

The quickest way to do this is using MQTT Explorer. When connected to localhost, open the Publish pane on the right-hand side of the main window. Under topic, enter zigbee2mqtt/plug1/set and select json as the data format.

In the text box, enter this json string:

{ "state" : "ON" }

and click Publish. This will set the state attribute of plug1 to ON. The syntax here needs to be precise; both the attribute name and attribute value must be enclosed in double-quotation marks, and they are case-sensitive.

If everything is working, then plug1 should turn on and any connected device will power up. You can repeat this with the state set to OFF or TOGGLE, which should work as expected. Change the topic field to reference plug2, plug3, etc. to control other devices.
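The topic and payload pair is simple enough to generate programmatically if you want to script this later. A minimal Python sketch (the base topic matches the base_topic value from configuration.yaml):

```python
import json

BASE_TOPIC = "zigbee2mqtt"  # must match base_topic in configuration.yaml

def set_state(friendly_name, state):
    """Build the (topic, payload) pair that sets a plug's state."""
    if state not in ("ON", "OFF", "TOGGLE"):
        raise ValueError("state must be ON, OFF or TOGGLE (case-sensitive)")
    topic = f"{BASE_TOPIC}/{friendly_name}/set"
    payload = json.dumps({"state": state})  # keys and values double-quoted
    return topic, payload

print(set_state("plug1", "ON"))
# ('zigbee2mqtt/plug1/set', '{"state": "ON"}')
```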

This works well for testing but is a little clunky to do on a regular basis. To achieve the same effect from the command line, use the mosquitto_pub tool included with Mosquitto. This is usually found in “C:\Program Files\Mosquitto“.

To turn on plug1 from a DOS command prompt, use this command:

    mosquitto_pub -h localhost -t "zigbee2mqtt/plug1/set" -m "{ ""state"": ""ON"" }"

Note the repeated double-quotation marks around “state” and “ON” in the json string. These ensure the quotation marks are not stripped out by the command line parser before they reach mosquitto_pub.

This command and similar variations can be easily encapsulated in a small batch file to automate things further. Here is a script called resetplug.cmd to reboot a device connected to a specific plug:

@echo off
set DELAY=3
set MQTTPUB="C:\Program Files\Mosquitto\mosquitto_pub.exe"
if not z%1==z goto action
echo "Usage: resetplug <plugname> [ <delay> ]"
goto done

:action
echo Resetting %1...
set PLUGPATH=zigbee2mqtt/%1/set
if not z%2==z set DELAY=%2
%MQTTPUB% -h localhost -t "%PLUGPATH%" -m "{ ""state"" : ""OFF"" }"
timeout /t %DELAY% >nul
%MQTTPUB% -h localhost -t "%PLUGPATH%" -m "{ ""state"" : ""ON"" }"

:done


You can then use the command “resetplug plug1” to reset the device connected to plug1. For devices needing a longer reset period, add an optional delay, e.g. “resetplug plug3 5”.
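If you’d rather drive this from Python than batch, the same off/wait/on sequence can be sketched with subprocess — passing the arguments as a list sidesteps the quoting problem entirely. (The mosquitto_pub path below assumes the default install location, and reset_plug is just an illustrative helper name.)

```python
import json
import subprocess
import time

MQTT_PUB = r"C:\Program Files\Mosquitto\mosquitto_pub.exe"  # default install path

def pub_args(plug, state):
    """argv list for mosquitto_pub; no doubled quotation marks needed."""
    return [MQTT_PUB, "-h", "localhost",
            "-t", f"zigbee2mqtt/{plug}/set",
            "-m", json.dumps({"state": state})]

def reset_plug(plug, delay=3):
    """Power-cycle whatever is connected to the named plug."""
    subprocess.run(pub_args(plug, "OFF"), check=True)
    time.sleep(delay)
    subprocess.run(pub_args(plug, "ON"), check=True)

# reset_plug("plug1")  # uncomment on a machine with Mosquitto installed
```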

After issuing these commands, you will notice additional topics appearing in the MQTT Explorer view. If you click on these topics, you can see the state of the plug changing in realtime. (This is useful if you are debugging more complex control scenarios.)


Following these simple steps makes it easy to control smart plugs from the command line. You’ll need to manually start Zigbee2MQTT if you reboot your PC, or more sensibly, create a batch file to do this and add it to your Windows Startup folder.

If you add additional smart plugs in the future, just set permit_join to true in configuration.yaml temporarily to allow the plugs to register, then add a suitable friendly name and you’re all set.

Windows 2000 0x0000007B INACCESSIBLE_BOOT_DEVICE

With luck, you’ll never need any of the information below. If you do then (a) you have my sympathy, and (b) you’re welcome!

One of my clients has an aging Microsoft IIS installation comprising a variety of Windows 2000 & 2003 servers (SQL, IIS, Domain Controller, File Server, Linux VMWare) running on a mixture of Dell server hardware.

The main IIS installation runs on Windows 2000 Server on a Dell PowerEdge 1950 using a Dell SAS 5/iR disk controller, to which are connected two SATA drives running independently (not RAID-ed). In July, the controller card failed catastrophically, rendering the server useless.

Fortunately, though this is old hardware, we were able to source a replacement controller card on eBay. Less fortunately, when I installed the new controller and attempted to boot from it, the server crashed a few seconds after displaying the Win2K splash screen with the dreaded blue screen of death:

Not great. After some investigation, I realised that though the replacement looked identical in all respects to the failed card (even the discrete components were positioned identically on the PCB), it had a slightly newer BIOS. This, it appeared, was sufficient for Windows to treat the controller card as a new, unrecognised device.

If you have to replace a disk controller on a Windows server, the usual advice is to install the new controller first, allow Windows to detect it, install any needed drivers, then — and only then — shut down, remove the old controller, and connect the hard drives to the new controller. Windows then has the needed drivers installed to allow it to boot Windows successfully.

Of course, in this scenario we didn’t have that luxury – the old controller was dead, so Windows wouldn’t boot at all. We needed to somehow install the updated drivers on the Windows system disk offline.

This turned out to be … tricky! Here are a few of the things I tried before figuring it out. (Needless to say, I copied the disk onto a fresh drive and performed my experiments on the copy. This ensured the original was always available if I needed to start over.)

1. Update controller drivers – FAIL

I downloaded updated drivers for the SAS 5/iR controller from Dell’s website, extracted them, and manually copied the driver files to the Windows c:\winnt\system32\drivers folder, overwriting the older versions with the same name. This made no difference at all.

Then I discovered the C:\WinNT\NLDRVS sub-folder which holds the core third-party drivers used by Windows before the whole plug & play subsystem is up and running. This contains a series of numbered sub-folders, one for each driver. Again, I updated this to use the latest versions of the driver files, and again it made no difference.

2. Repair Windows Installation – FAIL

Next, I attempted a Windows 2000 repair, which was a lot harder than I expected.

After tracking down the original Windows 2000 installation CD, I was unable to press F6 to install additional drivers from floppy (remember that?) because the PowerEdge 1950 has no floppy drive. Windows won’t recognise a USB flash drive at this point either.

There are hardware floppy emulators around that will accept a USB memory stick and present the contents as a floppy drive using the old 34-pin floppy cable standard. Unfortunately, the PowerEdge doesn’t have an internal header to connect such a device to. You can also buy floppy drives with a USB interface but I didn’t have one of those to hand (or any floppy disks to use with such a device).

Eventually, I discovered nLite, an excellent utility that lets you build a custom Windows installation CD which includes your selection of third-party drivers, service packs, and other customisations. I also found WinSetupFromUSB which lets you install a Windows installation CD on a USB stick in such a way that even the Windows 2000 Installer can successfully boot from it. (Some deep magic is used to make this work).

Between these, I was able to create a slip-streamed Windows 2000 SP4 installation CD with the latest Dell SAS 5/iR drivers pre-installed. Booting with this, I could get to the Repair Windows menu, find my Windows installation, and let the automatic repair try and fix it.

This was also unsuccessful – the automatic repair didn’t notice that the drivers it had booted with were different to the ones pre-installed on the original Windows disk, so it didn’t update them.

3. Perform an in-place Windows upgrade – Partial Success

By now, having spent a lot of effort trying various things, I figured there was only one thing for it – perform an in-place upgrade of Windows 2000 using the process outlined by this TechRepublic article. This is essentially a new Windows installation on top of the existing install. Windows is smart enough to replace the system files with fresh versions while preserving all existing third-party software and user profiles.

In principle, this allows you to resolve any hardware-related Windows glitches without having to re-install all your application software. This sounded good, because the mission-critical software running on this particular server is complex and the original designers and implementers were long since gone, leaving no documentation behind them. Recreating it from scratch on a clean Windows installation would have been unthinkable.

The re-install process went smoothly, albeit slowly, and once completed, Windows booted successfully. Hurray! Job done, right?

Well, not quite. The original installation had somehow ended up with the WINNT folder on E:\ while the tiny 2 GB FAT16 boot partition was on C:\. After the re-install, WINNT was located on C:\, along with Program Files and other system folders. This, of course, broke lots of things.

I was able to fix most of them by adding a scheduled task to SUBST drive E:\ to drive C:\ at startup, which made most of the system much happier. A few services started before this remapping occurred, and I located those in the Registry and updated their path references by hand. Yes, this is all ugly and horrible, but by this point, I just needed to get things working by any means!

(Word to the wise: be careful with removable USB backup drives, which usually grab the first available drive letter. If that happens to be E:, it stops the drive letter mapping working correctly and you’re back to square one.)

Microsoft Office was still a little unhappy, but became much happier after I carried out a Repair Install. I also had to re-assign appropriate drive letters to some of the data partitions.

Finally, after all of this …. IIS started correctly, websites were accessible, and all was right with the world! Hurray, again!

4. When is a success not a success?

Not so fast. One of the critical components of the website was the ability to upload formatted Word documents which were then automatically converted to XML for processing by the content management system. This wasn’t working correctly; in fact, it wasn’t working at all.

The issue seemed to be related to a custom COM object that had been developed for the project, and a method in this object was failing during the conversion of the Word document. Everything I could see indicated it was somehow connected to the Microsoft Office installation (since presumably Word itself was involved in the conversion).

I spent more than a week trying to get to the bottom of this. I re-registered all COM objects and relevant DLLs, checked the system and application logs for errors, enabled IIS debugging, etc – all the usual things you would expect. When I dug deeper, using Microsoft’s ProcMon tool, the issue seemed to be related to an instance of Internet Explorer that was launched during the conversion.

After many hours poring over ProcMon, IIS and Event logs, checking for unexpected failures buried in the midst of the many, many expected failures, I had to admit defeat. The server was working, but it wasn’t working reliably. It also had a tendency to hang random services during startup, and Windows Update refused to start, neither of which inspired confidence.

5. The Easy Way

By this stage, and with the client’s patience starting to reach its limits, I decided to use the knowledge gained working through the above to have another go, starting from scratch with the original disk again.

A chance remark on a discussion forum about SCSI adapter BIOS signatures being used by Windows to help identify the correct drive led me to a rarely visited part of the Windows 2000 registry known as the CriticalDeviceDatabase. (This no longer exists on modern versions of Windows.)

Further research brought me to Michael Albert’s invaluable page on manually adding a mass storage device to an existing Windows installation. As one commenter rightly said, “Never delete this page!” The information it contains is invaluable, and not easily found elsewhere. So, thank you Michael!

The registry key HKEY_LOCAL_MACHINE / SYSTEM / CurrentControlSet / Control / CriticalDeviceDatabase contains a series of sub-keys for all the devices needed to boot Windows. Third-party controller cards are referenced here by their PCI vendor, device and (crucially) subsystem code.

First, I needed to get the PCI code for the SAS 5/iR controller. On most Windows installations, you can visit Device Manager, open the Properties pages for the controller, and under the Details pane select Hardware IDs. However, on Windows 2000 this information isn’t so easily available. Instead, you need to run MSINFO32 and find the controller there, usually under SCSI devices.

When I ran MSINFO32 on my flakey Windows re-installation, the SAS 5/iR entry looked like this:

Checking Regedit on the same machine, I could see the following matching entry in the registry:


However, there was a second, almost identical, entry:


The only difference is the subsystem code, which has changed from 0x1f061028 to 0x1f091028. I concluded that the additional entry was the one used by the old controller card, and that it had survived the in-place Windows upgrade. For reasons best known to themselves, Dell must have revised the subsystem code when they updated the controller’s BIOS, possibly to provide an easy way for the driver to identify hardware with additional capabilities or some obscure hardware fix.

I went back to the original disk and copied the registry System hive from \WINNT\SYSTEM32\CONFIG\SYSTEM to my work computer, then loaded it into RegEdit by selecting HKEY_LOCAL_MACHINE, then using Load Hive and entering a temporary sub-key name (W2K-Recovery) to allow me to access it.

Drilling down there, I could see registry keys for ControlSet001 and ControlSet002 but no CurrentControlSet. This is normal when editing an offline registry hive — CurrentControlSet is created dynamically by the operating system and is not part of the hive itself. Instead, I checked under the Select key, which confirmed that the ‘Current‘ selection was set to 1 (indicating ControlSet001). And sure enough, under ControlSet001 / Control / CriticalDeviceDatabase, there was an entry for subsystem 0x1f061028 but not 0x1f091028.

I made a fresh clone of the original Windows drive, then manually added the CriticalDeviceDatabase entry for 0x1f091028 without changing anything else. (Again, I performed this by loading the System hive offline into RegEdit on my main work PC, making the modifications, then unloading it again and copying it back to the WINNT folder on the target disk.)
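For illustration, the added entry looked something like the fragment below, shown as a .reg export against the temporarily loaded W2K-Recovery hive. The vendor/device IDs, ClassGUID and service name here are placeholders from memory and may differ on your hardware — copy them from the existing subsys_1f061028 entry on your own system; only the subsystem code in the key name changes:

```reg
Windows Registry Editor Version 5.00

; Illustrative values only -- duplicate ClassGUID and Service from the
; existing subsys_1f061028 entry; only the subsystem code differs.
[HKEY_LOCAL_MACHINE\W2K-Recovery\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1000&dev_0054&subsys_1f091028]
"ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"
"Service"="lsi_sas"
```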

After this, the new drive booted straight into Windows with no issue. As it was the original Windows installation, everything was back to exactly the way it was before the disk controller died.

As with everything Windows related, an ounce of knowledge is worth a pound (or stone!) of experimenting! If you’ve made it this far, hopefully the information above will save you some wasted time and effort.

Recovering a dead SSD

My development PC from five years ago was relegated to secondary status when I upgraded. Now it acts as my main backup server for the network, and also an occasional disk copy station when I want to copy or recover drives without disrupting my main machine.

It uses a Crucial CT500BX SSD as its C: drive and over the past few months this has been acting up — occasionally after installing Windows updates and rebooting overnight, the system would report no boot drive available. This was a minor pain since it would sometimes take several days before I noticed, during which time nightly backups weren’t taking place. It would always come back correctly when I went into the BIOS and reselected the correct boot device.

A few days ago, however, it stopped working entirely and no amount of coaxing would bring it back to life. I was at the point of just giving up when I stumbled across a recovery technique that I hadn’t seen before. I tried it, and it worked!

The trick is to unplug power & data cables from the SSD, then connect power-only and let it sit for 30 minutes. Then disconnect power for 30 seconds, reconnect it, and leave it for another 30 mins. Finally, power everything down, connect both power and data cables, and if all is well, then your drive will be working again.

So, what’s going on? It seems that most SSDs have a built-in failsafe which kicks in after 30 minutes without any data activity. This will scrub any internal cache, return various settings to default, and generally put the drive back to a known good starting state. Crucially, it does this without affecting your drive’s data.

How does the drive get into this bad state? It’s unclear, but may occur if you shut it down while it’s in the middle of an internal update operation; another factor may be its internal garbage collection, which erases data blocks that are no longer in use, to have them free for the next write operation — if a drive is running close to full, the garbage collector may be working overtime and could eventually get itself into a knot.

Regardless of the cause, I was sceptical – but it worked! It’s easy to try, and a good trick to have in your arsenal.

A good description of the precise procedure is on David Farquhar’s blog. Crucial themselves outline a similar process on their SSD FAQ. Once you know what to look for, there’s plenty of other discussion about this online too.

Windows 10 Default Route Vanishing (again)

My previous attempt to stop the Windows 10 default route vanishing turned out to be unreliable; after a few Windows Updates, the default route was gone again.

However, I’ve finally found a way to make a statically configured default route properly persistent across reboots (something that other operating systems, and older versions of Windows, have no trouble doing at all).

To recap: for various reasons, I assign static IP addresses to most of my PCs instead of using DHCP to allocate them automatically. As part of the configuration I specify a default gateway to the Internet, like this:
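With made-up addresses standing in for my real ones, the static configuration is along these lines:

```
IP address:      192.168.1.10
Subnet mask:     255.255.255.0
Default gateway: 192.168.1.1
```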


Windows 10 appears unable to remember this default gateway across reboots, especially when I have additional IP subnets configured on the same network interface.

This isn’t a big deal if I’m sitting in front of the PC, since I can fix it quickly. However, it’s a show-stopper if I need to access the PC remotely since it is no longer connected to the Internet. When it happens with our main family PC, it’s even more annoying since the steps to resolve it are not intuitive for my wife and sons.

Today, after yet another such unplanned outage (thank you, Windows Updates), a lightbulb finally went off in my head — why not use the existing Windows Persistent Route capability to add a persistent default route? I’ve now tried this approach on several PCs and it seems to work reliably!

So without further ado, if you too are suffering from this problem, here’s how to fix it.

  1. Open a command prompt with Administrator privileges by right-clicking on the Command Prompt option and selecting Run As Administrator, or pressing <WindowsKey>-<R>, typing CMD in the Run box, and pressing <CTRL>-<SHIFT>-<Enter>
  2. Type ROUTE DELETE 0.0.0.0 to delete any existing default route; if you don’t have one at the moment, this will give an error, which is fine.
  3. Type ROUTE -p ADD 0.0.0.0 MASK 0.0.0.0 192.168.1.1 to add a new persistent default route via your gateway; replace 192.168.1.1 with your own gateway IP.


That’s it! Your default route should now be added back automatically whenever you reboot. If you need to change to a different default route, just repeat the steps above.
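Once added, you can confirm Windows actually stored the route by running:

```
ROUTE PRINT -4
```

The new route should appear in the “Persistent Routes” section of the output.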

Default route keeps vanishing on Windows 10

A number of my PCs have a persistent problem with the default route disappearing when the system is restarted. Since I use Remote Desktop to control most of these machines remotely, it’s rather annoying; if the PC reboots, I no longer have remote access.

I hoped it would correct itself when I upgraded to Windows 10, but it’s still happening. In fact, it’s much worse — Win10 feels it has carte blanche to reboot to install upgrades without asking my permission first and when it does, I have to visit the PC to reset the default route, or find someone to do it for me.

This all started a year or two ago, with a particular Windows 7 update (I’m not sure which one). The only common factor is that all the affected PCs use static IP addresses with a manually configured default route. It doesn’t occur when DHCP is used. Most of them also have multiple network adapters.

While I have a workaround using a startup script to manually re-add the missing route, it’s awkward to run this with the elevated command privileges needed to change the default route.

Google suggested various solutions, including:

  • Editing the network adapter and switching it to DHCP, exiting, then going back in and switching it to static again. This needs to be done twice to work.
  • Resetting the TCP/IP stack (by running netsh int ip reset from an elevated command prompt). This is quite drastic.

I tried both of these but neither fixed it for me.

Today, I found something that *did* work. I’m recording it here in case it helps someone else.

  • Run RegEdit and navigate to Computer\HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters\Interfaces
  • Go through each of the interface GUIDs in turn.
  • One of them will have an IPAddress field matching your main network interface. On this one, confirm that there is a REG_MULTI_SZ field called DefaultGateway containing a single text line with the IP address of your default route. If it’s not there, create it. Similarly, there should be a DefaultGatewayMetric field, also REG_MULTI_SZ, containing the single string ‘0’.
  • On all the other interface GUIDs, delete any DefaultGateway and DefaultGatewayMetric fields entirely.

After completing these steps, restart your PC. The correct default route should now be configured.
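If you’d rather script it than click through RegEdit, the same end state can be set from an elevated prompt with reg add; here {YOUR-INTERFACE-GUID} and 192.168.1.1 are placeholders for your own interface GUID and gateway:

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{YOUR-INTERFACE-GUID}" /v DefaultGateway /t REG_MULTI_SZ /d "192.168.1.1" /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{YOUR-INTERFACE-GUID}" /v DefaultGatewayMetric /t REG_MULTI_SZ /d "0" /f
```

A matching reg delete on the other interface GUIDs (with /v DefaultGateway and /v DefaultGatewayMetric) covers the final step.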

How big is the Eircode database?

Ireland’s new postcode scheme, Eircode is officially launched today. It’s been a long time coming and like it or lump it, we’ll all be using it soon.

There’s been plenty of debate and criticism of Eircode in recent months, some of it valid, some of it misplaced. However, I did read one thing that caught my attention — the suggestion that Eircode was impractical for use with portable GPS navigators because the full country-wide database would require 2 GB storage and exhaust their flash storage.

That sounds like a lot. Let’s see if it holds up to scrutiny.

USB playback problems on Samsung TVs

My dad has a Samsung Smart TV (UE55H6400) which he uses mostly as a display device for his Sky+ box; the advanced features of the TV are a little beyond him.

Recently though, he’s been asking for a way to play movies locally. When we met for lunch, I gave him an 8 GB USB flash drive with some films and later, over the phone, I walked him through selecting the USB device for playback using the Samsung TV remote.

This should have been straightforward – it certainly is on my older Samsung TV. But no dice – whatever we tried, his TV wouldn’t recognize the flash drive. There are three USB ports, including one labelled “USB HDD” but it made no difference which one we used.

Perhaps I had accidentally formatted the flash drive using NTFS instead of FAT32? My mum, who is a lot more computer literate than my dad, plugged the flash drive into their PC and I examined it using Remote Desktop. Sure enough, it was formatted as NTFS. While the newest Samsungs can handle this fine, my Dad’s model is a few years old so conceivably it was expecting FAT32.

Copying the movies to the PC, reformatting the drive, and copying them back again took about 20 minutes (cheap USB flash drives are SLOW). Ultimately, it made no difference – the TV still refused to recognize the drive. We gave it up as a bad job.

A week later, I was visiting and had a chance to try it myself – exactly the same results. At least user error wasn’t to blame.

Eventually we figured it out – it’s rather obscure! It seems that at least some models of Samsung TV decide whether or not a USB device is “hard drive”-like based on the precise way it is formatted. A single partition doesn’t count; it needs to have a full partition table with multiple partitions for the TV to recognize it as a valid drive.
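To make “has a full partition table” concrete, here’s a toy Python sketch (purely illustrative, nothing the TV itself runs) that counts the used entries in an MBR boot sector; a stick formatted as one big “superfloppy” volume has no valid MBR at all:

```python
# Toy illustration: count the used entries in an MBR partition table.
# The MBR is the first 512-byte sector of a drive; bytes 446-509 hold
# four 16-byte partition entries, and bytes 510-511 hold the 0x55AA
# boot signature.

def count_mbr_partitions(sector: bytes) -> int:
    """Return how many of the four MBR partition slots are in use."""
    if len(sector) != 512 or sector[510:512] != b"\x55\xaa":
        return 0  # no valid MBR signature means no partition table
    used = 0
    for i in range(4):
        entry = sector[446 + 16 * i : 446 + 16 * (i + 1)]
        if any(entry):  # an all-zero slot is unused
            used += 1
    return used
```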

How do you format it like this? The easiest option is to use a special tool like RMPrepUSB – download the latest version from the list on the home page.

RMPrepUSB’s initial options screen looks rather daunting:


It’s not too bad though if you just follow through the numbered sections:

  • Select your USB drive from the list at the top
  • In section 2, type a suitable Volume Label and set the partition as non-bootable
  • In section 4, select FAT32 and choose “Boot as HDD”
  • Click the Prepare Drive button at the bottom and off you go.

After a brief delay, you’ll have a freshly formatted USB flash drive that is now recognized by your Samsung Smart TV. Copy movies, photos or music to it and have fun!

I should emphasise that this only seems to be necessary for some models of Samsung TV (and also, reportedly, LG TVs); it wasn’t needed for my own model.

If RMPrepUSB seems a little too daunting, you could also try Rufus by Pete Batard of Akeo, only up the road in Co. Donegal. While I haven’t had a chance to use it for this particular application yet, the Advanced option “Add fixes for old BIOSes” should have the same effect. Please let me know if it works for you!

From b2Evolution to WordPress

In January 2005, I wrote my first blog entry. b2Evolution was my tool of choice to manage my blog — it was free, simple to install, and more than adequate for my needs. I’d never heard of WordPress back then, though it shares a common origin with b2Evolution – they were both forks of b2/cafelog, one of the original blogging systems.

Fast forward ten years, and WordPress rules the world. I’ve used it on countless projects for clients and friends, and it’s an extremely flexible and powerful CMS. I’m also now far more familiar and comfortable using it than I ever was with b2Evolution.

Which is why, finally, I’ve migrated this blog to the latest version of WordPress, version 4.1. (If you’re wondering, the photo in the banner is Pan’s Rock in Ballycastle, Co Antrim, taken last April. Here are some more Antrim photos from the same trip.)

For those interested, the nitty gritty steps required are below; everyone else can stop reading now.

Database Migration

The biggest challenge was transferring the existing blog contents from b2Evolution’s database to WordPress’s database. Since many bloggers have travelled this path in the past, I expected this to be straightforward. However, most of them appeared to (a) be running a much newer version of b2evolution than me, and (b) have made the move long ago, to a much older version of WordPress.

No matter. The first step was to find a script close to what I needed, in this case a script called import-b2evolution-wp2.php.txt, referenced by Christian Cawley’s helpful b2evolution migration guide. The site that originally hosted it is no longer online, but luckily a copy of the most recent import-b2evolution-wp2.php.txt is still available for download, along with all the older copies.

Although it should go without saying, now is a good time to backup your b2evolution database! Just in case…
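For a typical setup, where the blog database lives in MySQL, a dump along these lines does the job (substitute your own username and database name):

```
mysqldump -u USERNAME -p B2EVO_DATABASE > b2evolution-backup.sql
```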

While the script didn’t work right away, it was certainly a good start. I made a few tweaks to it and managed to get it working properly on my installation. You can download my copy here – make sure to remove the trailing .txt suffix after downloading.

I installed WordPress as usual, specifically WordPress 2.7 from the WordPress archives since I wanted a fairly old version. I configured it to use b2evolution’s database — WordPress uses different table names, so they don’t conflict with each other. Plus, the migration script expects this, so you don’t really have a choice.

Next, I uploaded the migration script to my WordPress wp-admin folder, then invoked it directly (e.g. http://yourblogaddress/wp-admin/import-b2evolution-wp2.php) and filled in the relevant values in the form presented.

It took me a couple of goes to get it right, so after the first failure, I installed the WordPress Reset plug-in; this makes it very easy to reset the WordPress database ready for another try, without having to do a full WordPress re-install, and without altering the b2evolution entries.

I highly recommend checking your database with phpMyAdmin afterwards to make sure the posts look correct!

Even with the script, I still had to manually update the categories – my version of the script didn’t migrate them across properly. Since I only had 100 entries or so, it was easy enough to sort them by category in b2Evolution using phpMyAdmin. I could then select multiple posts by hand in WordPress and assign them to each category using the bulk update option.

(If I’d had many more posts, I might have spent some more effort getting the category migration working correctly.)

Finally, once I was confident everything was working okay, I updated WordPress from 2.7 to 4.1, which is MUCH nicer.

And all done!

Legacy URL support

Well, not quite done it turned out. There are plenty of links out there to my old b2evolution posts, and it would be nice if they could magically redirect to the new WordPress equivalent, to keep both the search engines and users happy.

This turns out to require a little .htaccess magic, and some PHP scripting. I added the following to WordPress’s .htaccess (I’ve reproduced the entire file for reference):

# BEGIN WordPress

RewriteEngine On
RewriteBase /blog/

# Check for references to the old b2evolution blog and send them
# to our redirect script where they'll be properly handled.
RewriteRule b2redirect.php - [L]
RewriteCond %{QUERY_STRING} ^(m=|.*cat=|.*blog=5|.*author=|pb=1|title=)
RewriteRule .* /blog/b2redirect.php [L,R=301]

# Normal WordPress rules
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /blog/index.php [L]

# END WordPress

(Watch out for word wrap on the QUERY_STRING line – the bracketed items are part of the same line.)

Essentially, this says that any query strings passed in to the blog of the form m=xxx (date reference), cat=xxx (category reference), blog=5 (my old Blog’s internal ID), author=xxx (show author posts), title=xxx (title reference) or pb=xx (b2evolution specific) should be directed to my custom script b2redirect.php without further ado and everything else should be handled by WordPress as usual.

(We use a 301 Redirect to indicate to browsers and search engines that this is a permanent redirection, and the new URL should be used in future.)

I learnt a couple of useful things about mod_rewrite while figuring this out. I hadn’t fully appreciated that RewriteRule patterns are matched only against the URL path, never the query string; if you need to match parameter names or values, you must use RewriteCond with the %{QUERY_STRING} variable.

And of course, I got caught out by having the parameters in my redirected URL immediately trigger another redirect when the page was refetched, until eventually it gave up. This is why the very first rule says that references to b2redirect.php should be passed through without any rewriting at all.

So what is b2redirect.php? It’s a small script I wrote that interprets the old b2Evolution parameters and figures out a WordPress equivalent. Here it is:

<?php
// Redirect old b2evolution blog URLs to their WordPress equivalents.
$baseurl = ""; // base URL of the blog (set this to your own blog's root)

// Map old b2evolution category IDs to WordPress category slugs
$catmap = array(
    14 => "observation",
    15 => "technology",
    16 => "random-thoughts",
    17 => "networking",
    18 => "windows",
    19 => "rant",
    20 => "useful-links",
);

$title  = isset($_GET["title"])  ? $_GET["title"]  : "";
$m      = isset($_GET["m"])      ? $_GET["m"]      : "";
$cat    = isset($_GET["cat"])    ? $_GET["cat"]    : "";
$author = isset($_GET["author"]) ? $_GET["author"] : "";

// Default URL: the blog home page
$url = "$baseurl/";

if (!empty($title) && !strpos($title, ":")) {
    // Title references map straight onto WordPress post slugs
    $url = "$baseurl/$title";
} else if (!empty($cat) && !empty($catmap[$cat])) {
    // Category IDs map to the new WordPress category names
    $url = "$baseurl/category/$catmap[$cat]";
} else if (!empty($m) && (strlen($m) == 4 || strlen($m) == 6)) {
    // Date references (yyyy or yyyymm) map to WordPress archive URLs
    $year  = substr($m, 0, 4);
    $month = substr($m, 4, 2);
    if ($year >= 2005 && $year <= 2013) {
        $url = "$baseurl/$year/";
        if (strlen($month) > 0)
            $url .= "$month/";
    }
} else if (!empty($author)) {
    $url = "$baseurl/author/eddy/";
}

// Now issue the permanent redirect to the new location
header("HTTP/1.1 301 Moved Permanently");
header("Location: $url");


Once again, the categories needed some special handling. Otherwise, it was straightforward – month references get changed to WordPress archive format (year/month); titles are mapped to the equivalent WordPress direct URL; category numbers go to the new WordPress equivalent name; author references show the WordPress author page; and everything else goes to the home page of the blog – better than a 404 Page Not Found.
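As a sanity check, the same set of mapping rules is easy to exercise outside the web server; here they are sketched in Python (map_url is a hypothetical helper name, with the category slugs and year range copied from the script above):

```python
# Sketch of the b2evolution -> WordPress URL mapping rules, mirroring
# the logic of b2redirect.php so it can be tested from the command line.
CATMAP = {14: "observation", 15: "technology", 16: "random-thoughts",
          17: "networking", 18: "windows", 19: "rant", 20: "useful-links"}

def map_url(params, baseurl=""):
    """Map a dict of old b2evolution query parameters to a WordPress URL."""
    title  = params.get("title", "")
    m      = params.get("m", "")
    cat    = params.get("cat", "")
    author = params.get("author", "")

    if title and ":" not in title:
        return f"{baseurl}/{title}"                      # direct post slug
    if cat.isdigit() and int(cat) in CATMAP:
        return f"{baseurl}/category/{CATMAP[int(cat)]}"  # category page
    if m.isdigit() and len(m) in (4, 6):
        year, month = m[:4], m[4:6]
        if 2005 <= int(year) <= 2013:
            url = f"{baseurl}/{year}/"                   # archive page
            if month:
                url += f"{month}/"
            return url
        return f"{baseurl}/"
    if author:
        return f"{baseurl}/author/eddy/"                 # author page
    return f"{baseurl}/"                                 # fall back to home
```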

So that’s that job done. Now let’s see what the next 10 years brings…

Windows 7 filesharing limit

I have a Sonos music system at home, and it’s great — I use it to make music stored on my Windows 7 Media Center system accessible throughout the house, among other things.

One of the nice things about Sonos is that it’s really easy to setup. I just point its music library at a network Share on my Media Center system and it automatically keeps everything indexed and up to date. Every now and again, however, I go to play a music track and I’m told it can’t find it. Specifically, it can’t access the file using the network share path, even though the file is there and I can play it fine locally on Media Center.

Recently, I finally figured out what was going on – a simple but not-very-well-publicised limitation built into Windows 7 Home Premium which restricts the resources used to manage network file shares. If you have a number of PCs in your house, as I do, all accessing shares on a particular PC, you can run out of resources. When this happens, any further attempts to access the share will fail. Not good!

Happily, there is a straightforward fix – Alan Lamielle describes it on his blog.

The short version is to find this registry value on the Windows 7 machine:

HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache

and change it from ‘1’ to ‘3’:


Then restart the ‘Server’ service (or restart Windows itself if you prefer) and everything will be back to normal again.
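For reference, restarting the Server service from an elevated command prompt looks like this (the /y answers yes to the prompt about stopping dependent services):

```
net stop server /y
net start server
```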

Since making this change, I haven’t had a single recurrence of the problem – happy days!