Pet Insurance Australia – just shit…

Dogs…. just fluffy balls of awesomeness, right?

Just like we have health insurance, i got pet insurance for our first Golden Retriever – who turned 11 last month – through Pet Insurance Australia… as they seemed to be ok-ish based on the online reviews… acknowledging that it’s incredibly difficult to discern a real review from a bot-farm review anymore.

He’s had a full life of playing with other dogs (his favourite), his little human, his therapy dog work and the rest of our family… like most goldens, he’s pretty much universally loved… because he’s fucking awesome and might well be the nicest creature on the planet – ever.

All the way back in 2016, i got pet insurance for him because – risk and risk mitigation. At the time it was around the $500 a year mark.

Fast forward to yesterday (July 2024) – the premiums are now approx $2200 for the upcoming renewal. On one hand, i understand inflation and that his risk profile has changed now he’s older… on the other – isn’t that what i paid premiums for the last 8 years to help cover?

When i rang to cancel the policy, i got the same old bullshit, including an offer to give us 3 months free… which really sealed the deal for me. If you can offer 3 months for free, then you’re just price gouging (like most corporates at the moment, i’m not saying this is isolated) rather than increasing prices in line with inflation.

Fuck you Pet Insurance Australia…. there aren’t many sacred things left in the world – but the health of doggies everywhere is one of them – you don’t fuck with that…. may you all get bowel cancer and die a long, incredibly painful death.

Moving from Synology to QNAP

My Synology 2413+ 12 bay NAS recently died after 12 years of service.

This NAS was primarily used as:

  • an iSCSI backup target for Veeam
  • video recording for home security cameras
  • media storage

Overall, i was pretty happy with the unit itself – but as per most companies these days, support was non-existent…. so when i did run into an issue, i was on my own.

Due to that, and Synology not being able to answer what would happen with my surveillance station licenses, i made the decision to go for a QNAP as:

  • It was a little cheaper for better hardware specs (this is in the 8-bay desktop model i was looking at – may be different for other models)
  • QVRPro – the equivalent of Synology Surveillance Station – is free for up to 8 cameras – and i only use 4. There is apparently a 14 day retention limit on video at the “free” license level…. and while i would prefer it to be 31 days…. it’s going to be fine most of the time.

In the ways i’m interested in, the QNAP has so far proven to be quite good: setup and joining to an AD domain was simple and painless; adding disks, storage pools and volumes was easy and clear; QVRPro setup had very minor hiccups (more due to my understanding than the software)… but it hasn’t been all great. The issues i have noticed so far:

  • The lack of a Synology Hybrid RAID equivalent isn’t a disaster, but disappointing…
  • Due to the above, i have purchased some more 8TB disks (previously i had a mix of 6TB and 8TB) – the time taken to expand/repair is significant (as expected) – but the disappointing thing has been the performance of the device while this is occurring. Trying to stream anything during this process has been pointless – with constant dropouts. Having performance degrade during a repair or expand is not unexpected – but not to the point of drop-outs.

Will be interested to see the performance difference once the rebuild has finished.

WinRM fails on DC with event ID 142

For a while i have had a niggling issue where, on a DC used by a number of in-house coded applications, WinRM would fail intermittently with the following:

Log: Microsoft-Windows-WinRM/Operational

EventID: 142

Event Message: WSMan operation Enumeration failed, error code 2150859046

There isn’t much to go on for this error when googling – and MS support – well… no point in trying that.

After verifying permissions and configuration, checking server resources etc… i was at a point where i didn’t know how to “fix” it or even have any leads.

I initially put in a simple script to restart the service nightly… but every now and again, the stop of the service would hang…. so i’d have to kill the process.

I’ve ended up going down a path of:

  • Attaching a scheduled task to eventID 142
  • To get around powershell restrictions – have it launch a batch file containing

reg add HKLM\SOFTWARE\Policies\Microsoft\Windows\PowerShell /v ExecutionPolicy /t REG_SZ /d unrestricted /f
powershell.exe -NoProfile -NoLogo -NonInteractive -ExecutionPolicy Unrestricted -File C:\data\TerminateAndRestartWinRM.ps1
reg add HKLM\SOFTWARE\Policies\Microsoft\Windows\PowerShell /v ExecutionPolicy /t REG_SZ /d AllSigned /f
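For reference, the event-triggered task itself can also be created from the command line rather than through the Task Scheduler GUI. A rough sketch – the task name, batch file path and run account here are just examples, not what i actually used:

```batch
schtasks /Create /TN "RestartWinRMOn142" /RU SYSTEM ^
  /SC ONEVENT /EC Microsoft-Windows-WinRM/Operational ^
  /MO "*[System[(EventID=142)]]" ^
  /TR "C:\data\RestartWinRM.bat"
```

The /MO parameter takes an XPath filter against the event channel named in /EC, so the task only fires when event ID 142 is logged.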

TerminateAndRestartWinRM.ps1 contains

Start-Transcript C:\Data\WinRMTerminate.log

Write-Host "Getting the WinRM ProcessID"
$winRMService = Get-WmiObject -Class Win32_Service -Filter "Name='WinRM'"
$processId = $winRMService.ProcessId

if ($processId -ne 0) {
    Write-Host "Terminating ProcessID: $processId"
    Stop-Process -Id $processId -Force

    Write-Host "Sleeping for 10 seconds to wait for the process to terminate"
    Start-Sleep -Seconds 10
}

Write-Host "Starting WinRM"
# Start the WinRM service
Start-Service -Name WinRM

Stop-Transcript



Not the best thing ever – and i generally don’t like these types of “hacky” solutions…. but given that MS has moved from “mostly unsupported” to “completely unsupported” for everything that isn’t in Azure…. (which even then is mostly unsupported)… we don’t have much choice anymore.

AlwaysON VPN breaks after root certificate update


  • After updating the internal CA root certificate, AlwaysOn VPN stops working with an error (at the user end) of “A Certificate could not be found that can be used with this Extensible Authentication Protocol”
  • In this case, we were using an Enterprise integrated CA and renewed the root using the same signing keys – which should ease the process – at least for all windows clients
  • AOVPN is configured to use PEAP for authentication



  • Initially, 4 out of the 6 AOVPN servers had not received the new root cert from a gpupdate yet – so i forced that and restarted the service, but it made no difference
  • We discovered that the issue only occurred on devices which had the updated root cert in the trusted root store. Additionally, for those that had updated, if we deleted the new root cert from the store, AOVPN would connect again
  • We quickly found this article by the doyen of DirectAccess and AOVPN –  
    • While it’s a good article – it ended up not being our issue and actually led us down the wrong path a little
    • At the same time, for someone that wasn’t overly familiar with AOVPN (this was implemented by someone else and i’ve not had much to do with it) it was great, because i could look at the scripts and suss out some of the relevant powershell cmdlets
  • After checking and re-checking every setting under the sun, a colleague worked out that she could connect again after updating the configuration at the client end
  • Once she worked that out, we then clarified and replicated the change on a different machine to be sure – and confirmed it was all good



  • On a client machine, we updated the AOVPN configuration to include (i.e. tick the new as well as the old root cert) the updated root cert in 3 places under
    • <AOVPN connection name> / Properties / Security / Properties
    • <AOVPN connection name> / Properties / Security / Properties /Configure
    • <AOVPN connection name> / Properties / Security / Properties /Configure / Advanced
  • Confirm that the AOVPN connection is working
  • Export the profile using the script from
  • Look at the xml – you should now see the thumbprints of both the “old” and “new” root certificates listed in multiple sections
  • Copy the <EapHostConfig> section, from its opening xml tag to its closing xml tag, and insert it into the “EAP xml” part of the Intune AOVPN configuration
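For orientation, the relevant part of the exported profile xml looks roughly like the fragment below. This is heavily trimmed and the thumbprints are invented – the point is simply that a <TrustedRootCA> element should exist for both the old and the new root:

```xml
<EapHostConfig xmlns="http://schemas.microsoft.com/provisioning/EapHostConfig">
  <!-- ...EAP method and config elements trimmed... -->
  <ServerValidation>
    <!-- thumbprint of the "old" root (example value) -->
    <TrustedRootCA>1a 2b 3c 4d 5e 6f 70 81 92 a3 b4 c5 d6 e7 f8 09 1a 2b 3c 4d</TrustedRootCA>
    <!-- thumbprint of the "new" root (example value) -->
    <TrustedRootCA>9f 8e 7d 6c 5b 4a 39 28 17 06 f5 e4 d3 c2 b1 a0 9f 8e 7d 6c</TrustedRootCA>
  </ServerValidation>
  <!-- ...remainder trimmed... -->
</EapHostConfig>
```

If only one of the two thumbprints is present, clients trusting the other root will throw the “A Certificate could not be found” error described above.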

Documenting AD ACLs

A while ago i joined an organisation whose MS estate was in need of a significant amount of love, time and effort. Getting them off of 2012 R2 DC’s and onto 2022 DC’s, upgrading the forest/domain functional levels and getting replication times down were the obvious first jobs… but once they were done – there were so many other things to do – it was hard to know what to tackle first. So… i made a start on all of it at once – knowing it would probably take all year to get the AD into a semblance of decent condition.

The more i looked, the more i found… one thing that was/is particularly disturbing is that the DS ACLs have been fucked with at the top level – and flowed down to all descendant objects for some admin accounts, service accounts etc…. stuff that clearly doesn’t need, or has never needed, that level of access….

Before changing anything, the goal is to document the permissions – as it is a spaghetti of inherited, non-inherited and multi-nested groups applied at many different levels…. resulting in one severe head-fuck for anyone trying to do anything effective with permissions delegation.

First of all i tried

A decent solution – which works perfectly in my test environment, but in the prod environment, with thousands of OUs and a stupid level of excessive custom permissions, it consistently uses approx 4GB of memory before dying. So while this is definitely a good script – it just doesn’t work in this prod environment…. and that’s because of how fucked the environment is, not because the script is bad.
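As an aside – if you just need a quick, low-memory look at a single OU rather than the whole tree, the AD: drive from the ActiveDirectory module is enough. A rough sketch (the OU DN and output path below are made up – substitute your own):

```powershell
Import-Module ActiveDirectory

# Hypothetical OU – replace with a real distinguished name
$ou = 'AD:\OU=Servers,DC=domain,DC=com,DC=au'

# Dump the ACEs on that one object, including whether each is inherited
(Get-Acl -Path $ou).Access |
    Select-Object IdentityReference, ActiveDirectoryRights, AccessControlType, IsInherited |
    Export-Csv -Path C:\Data\ServersOU-ACL.csv -NoTypeInformation
```

It obviously doesn’t scale to documenting an entire domain, but it’s handy for spot-checking a single OU without firing up a full tool.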

I moved on and found

Which seems to be an exceedingly nice (powershell based) AD ACL solution…. an optional GUI, plenty of configuration options and great output options – a really good solution.

For me – i needed to tick “inherited permissions”… as it is important for me to demonstrate how incredibly stupid (in case you haven’t noticed, I’m still flabbergasted that someone would do this….) it is to allocate permissions at the top level of a domain – along with having complete documentation.


Well done & thanks to the author – Robin Granberg – for creating a genuinely awesome tool.


Now comes the hard bit – removing the permissions without breaking anything.

Removing folders with a trailing space on NTFS volumes

At the moment i’m cleaning up a very poorly designed and implemented file server structure.

and before you say it – a large amount of data has been moved into teams/sharepoint/onedrive etc already – but the storage costs were getting excessive – so there is still plenty of data on prem.

One of the issues i’ve run into while cleaning up unused DFS-R replicas is folders that have spaces at the end of the name – “D:\Sales\December ” for example – which NTFS does not support…. but seems to be something Mac users do regularly (for unknown reasons).

These folders cannot be deleted via the GUI.

Open an elevated command prompt and run:

rmdir /q "\\?\D:\Sales\December "
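To find any other offenders in bulk, something like the sketch below can wrap the same \\?\ trick. This is a rough approach rather than a tested tool – how cleanly powershell enumerates trailing-space names can vary, so check the list it produces before letting it delete anything:

```powershell
# List directories whose names end in whitespace, then remove each one
# via cmd's rmdir with the \\?\ literal-path prefix, which bypasses the
# normal Win32 name rules that make these folders undeletable in the GUI
Get-ChildItem -Path D:\ -Directory -Recurse |
    Where-Object { $_.Name -match '\s$' } |
    ForEach-Object { cmd /c rmdir /q "\\?\$($_.FullName)" }
```

Note that rmdir without /s will only remove empty folders – which is a reasonable safety net while cleaning up.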

AADConnect – get synced and excluded OUs via powershell

AADConnect has a JSON file and the ability to export – and there are also various AADConnect documenters out there… but sometimes you just want to get a core piece of info without having to start the GUI or wade through many pages of JSON.

Get-ADSyncConnector | select Name

Note the name of your “internal” domain connector (the one that doesn’t have “AAD” at the end), then:

(Get-ADSyncConnector -name <ConnectorName>).Partitions.ConnectorPartitionScope.ContainerInclusionList

(Get-ADSyncConnector -name <ConnectorName>).Partitions.ConnectorPartitionScope.ContainerExclusionList
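Putting the above together – assuming there is exactly one on-prem (non-AAD) connector, both lists can be grabbed in one go:

```powershell
# Grab the connector whose name doesn't contain "AAD" - i.e. the on-prem AD connector
$connector = Get-ADSyncConnector | Where-Object { $_.Name -notlike '*AAD*' }
$scope = $connector.Partitions.ConnectorPartitionScope

$scope.ContainerInclusionList   # OUs being synced
$scope.ContainerExclusionList   # OUs explicitly excluded
```

If you have multiple forests connected, the Where-Object filter will return more than one connector and you’ll need to pick the right one by name instead.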

Otherwise healthy DC failing DFS-R, pointing to DC that no longer exists

Today i had a DC that was otherwise healthy, but reporting error 4612 and 5012 in the DFS Replication log, specifically:

The DFS Replication service failed to communicate with partner <decommissioned DC name> for replication group Domain System Volume. The partner did not recognize the connection or the replication group configuration.

My first port of call was to open ADSIEdit.msc and check

CN=Topology,CN=Domain System Volume,CN=DFSR-GlobalSettings,CN=System,DC=domain,DC=com,DC=au

but the dead server was not in there.

After some googling, i found a reference to

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DFSR\Parameters\SysVols\Seeding SysVols\DomainName\Parent Computer

and sure enough – this was referencing the now-decommissioned DC – no idea how it happened. The new and old DCs were both online together for over a month – it should not have still been seeding…. but obviously something went wrong.

Updated the name to a DC that existed, restarted the DFS-R service, waited about 15 seconds – all is now good.
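The check-and-fix can also be done from powershell. A sketch – note that the subkey under “Seeding SysVols” is your domain name, and the DC name below is just an example:

```powershell
# Key name varies per environment - the subkey is the domain name
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\DFSR\Parameters\SysVols\Seeding SysVols\domain.com.au'

# See which partner this DC thinks it is seeding SYSVOL from
Get-ItemProperty -Path $key -Name 'Parent Computer'

# Point it at a DC that actually exists, then restart DFS-R
Set-ItemProperty -Path $key -Name 'Parent Computer' -Value 'DC02.domain.com.au'
Restart-Service -Name DFSR
```

As always with direct registry edits to DFSR, take a backup of the key first.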

Moving off of HostGator

In some truly bizarre circumstances, my previous WordPress host – HostGator – suspended my account a few days after i paid for another 3 years.

I suspect it had something to do with their “support” directing me to turn off automatic billing – which actually cancelled the account instead of only turning off billing.

While frustrating – it would have been ok…. but the response by Hostgator support was beyond poor.

1) They claimed to have emailed me about the cancellation. Since i use a Hotmail account, i can’t see the SMTP or spam logs to verify whether they actually did…. but i can say that all their billing emails reach that account fine – while none of the messages they claim to have sent from support have ever arrived… and their response to this was “check your spam folder”… excellent point… never thought of that. Just insultingly basic.

2) They were unable to give me a reason for the cancellation… as above, i suspect it was an unintended consequence of turning off auto-renew… but surely they would be able to see that.

3) They were seemingly happy to cancel my account, but not refund my money… while repeatedly claiming they have refunded my money – and to “check with my bank” – whatever the fuck that means…. while I’m looking at my online banking and can see the money going out – but no money coming back in.

4) They have emailed me a site backup – which will arrive in 24 hours (it never did)… they will also get someone from their accounts team to email me within 24 hours about the refund (never happened). I managed to eventually get a cpanel backup (rather than my preference of a wordpress export) out of them via the online chat, which fortunately only had corruption in stuff i didn’t need anymore…. and for the refund i have taken the approach of lodging a dispute via my bank – as it’s pretty clear to me that it’s intentional theft on HostGator’s part.


Anyhoo – long story short – i would not recommend hostgator.

Error: SWbemObjectEx: Invalid index when trying to update a NIC using SConfig on server core

When using SConfig on a server core install, i was getting the following error

I had similar issues when trying to configure the NIC using powershell.

Thanks very much to Mike and his post @

for pointing out that it was because IPv6 was not bound to the adapter.

Using the following powershell worked for me

Enable-NetAdapterBinding -Name Ethernet -ComponentID ms_tcpip6


The other important thing here is that unbinding IPv6 from adapters is a relatively common and completely silly practice. It frequently causes issues and doesn’t even achieve the goal of properly disabling IPv6 on the machine.

If you want to disable IPv6 – do it properly – via the registry as per

Name: DisabledComponents
Min Value: 0x00 (default value)
Max Value: 0xFF (IPv6 disabled)
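As a one-liner, that registry change looks like this (0xFF fully disables IPv6, and a reboot is required for it to take effect):

```powershell
# DisabledComponents lives under the Tcpip6 service parameters;
# Set-ItemProperty creates the value if it doesn't already exist
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters' `
    -Name DisabledComponents -Type DWord -Value 0xFF
```

Set the value back to 0x00 (the default) to re-enable IPv6.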