Removing folders with a trailing space on NTFS volumes

At the moment I'm cleaning up a very poorly designed and implemented file server structure.

And before you say it – a large amount of data has been moved into Teams/SharePoint/OneDrive etc. already – but the storage costs were getting excessive, so there is still plenty of data on-prem.

One of the issues I've run into while cleaning up unused DFS-R replicas is folders that have spaces at the end of the name, such as "D:\Sales\December " for example – which NTFS does not support, but which seems to be something Mac users do regularly (for unknown reasons).

These folders cannot be deleted via the GUI.

Open an elevated command prompt and run:

rmdir /q "\\?\D:\Sales\December "

The \\?\ prefix bypasses the normal Win32 path parsing (which is what strips trailing spaces), so the folder name is passed through verbatim.
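If you prefer PowerShell, the same trick works with -LiteralPath – a sketch using the example path from above:

```powershell
# -LiteralPath stops PowerShell interpreting the path, and the \\?\ prefix
# bypasses the Win32 normalisation that strips trailing spaces
Remove-Item -LiteralPath '\\?\D:\Sales\December ' -Recurse -Force
```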

AADConnect – get synced and excluded OUs via PowerShell

AADConnect has a JSON file and the ability to export – and there are also various AADConnect documenters out there… but sometimes you just want to get a core piece of info without having to start the GUI or wade through many pages of JSON.

Get-ADSyncConnector | select Name

Note the name of your “internal” domain as the connector (the one that doesn’t have “AAD” at the end)

(Get-ADSyncConnector -name <ConnectorName>).Partitions.ConnectorPartitionScope.ContainerInclusionList

(Get-ADSyncConnector -name <ConnectorName>).Partitions.ConnectorPartitionScope.ContainerExclusionList
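Putting the above together – a rough sketch that pulls both lists for the on-prem connector in one go (the Where-Object filter on the name is an assumption; check your connector names first):

```powershell
Import-Module ADSync
# Assume the on-prem connector is the one without "AAD" in its name
$connector = Get-ADSyncConnector | Where-Object { $_.Name -notlike '*AAD*' } | Select-Object -First 1
$scope = $connector.Partitions.ConnectorPartitionScope
"Included OUs:"; $scope.ContainerInclusionList
"Excluded OUs:"; $scope.ContainerExclusionList
```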

Otherwise healthy DC failing DFS-R, pointing to DC that no longer exists

Today I had a DC that was otherwise healthy, but reporting errors 4612 and 5012 in the DFS Replication log, specifically:

The DFS Replication service failed to communicate with partner <decommissioned DC name> for replication group Domain System Volume. The partner did not recognize the connection or the replication group configuration.

My first port of call was to open ADSIEdit.msc and check

CN=Topology,CN=Domain System Volume,CN=DFSR-GlobalSettings,CN=System,DC=domain,DC=com,DC=au

but the dead server was not in there.

After some googling, I found a reference to

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DFSR\Parameters\SysVols\Seeding SysVols\DomainName\Parent Computer

and sure enough – this was referencing the now-decommissioned DC. No idea how it happened; the new and old DCs were both online together for over a month, so it should not have still been seeding… but obviously something went wrong.

I updated the name to a DC that existed, restarted the DFS-R service, waited about 15 seconds – and all is now good.
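For reference, the check and fix can be scripted – a sketch, with the domain subkey name and DC name as placeholders for whatever exists in your environment:

```powershell
# The subkey under "Seeding SysVols" is named after your domain
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\DFSR\Parameters\SysVols\Seeding SysVols\domain.com.au'
(Get-ItemProperty -Path $key).'Parent Computer'      # see what it currently points at
Set-ItemProperty -Path $key -Name 'Parent Computer' -Value 'DC01.domain.com.au'  # point at a live DC
Restart-Service -Name DFSR
```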

Moving off HostGator

In some truly bizarre circumstances, my previous WordPress host, HostGator, suspended my account a few days after I paid for another 3 years.

I suspect it had something to do with their "support": when directed to turn off automatic billing, they actually cancelled the account instead of only turning off billing.

While frustrating – it would have been ok…. but the response by Hostgator support was beyond poor.

1) They claimed to have emailed me about the cancellation. Since I use a Hotmail account, I can't see the SMTP logs or spam logs to verify whether they actually did… but I can say that all their billing emails reach that account fine, while none of the messages they claim to have sent from support have ever arrived. Their response to this was "check your spam folder"… excellent point… never thought of that. Just insultingly basic.

2) They were unable to give me a reason for the cancellation. As above, I suspect it was an unintended consequence of turning off auto-renew – but surely they would be able to see that.

3) They were seemingly happy to cancel my account, but not refund my money… while repeatedly claiming they had refunded my money and telling me to "check with my bank" – whatever the fuck that means – while I'm looking at my online banking and can see the money going out, but no money coming back in.

4) They said they had emailed me a site backup, which would arrive in 24 hours (and never did)… and that someone from their accounts team would email me within 24 hours about the refund (never happened). I managed to eventually get a cPanel backup (rather than my preference of a WordPress export) out of them via the online chat, which fortunately only had corruption in stuff I didn't need anymore… and for the refund I have taken the approach of lodging a dispute via my bank – as it's pretty clear to me that it's intentional theft on HostGator's part.

 

Anyhoo – long story short – I would not recommend HostGator.

Error: SWbemObjectEx: Invalid index when trying to update a NIC using SConfig on server core

When using SConfig on a Server Core install, I was getting the "SWbemObjectEx: Invalid index" error when trying to update the NIC, and had similar issues when trying to configure the NIC using PowerShell.

Thanks very much to Mike and his post @ https://mikeconjoice.wordpress.com/2017/01/24/windows-server-core-error-swbemobjectex-invalid-index/ for pointing out that it was because IPv6 was not bound to the adapter.

The following PowerShell worked for me:

Enable-NetAdapterBinding -Name Ethernet -ComponentID ms_tcpip6
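You can confirm the binding took effect (before re-trying SConfig) with:

```powershell
# "Enabled" should now show True for ms_tcpip6 on the adapter
Get-NetAdapterBinding -Name Ethernet -ComponentID ms_tcpip6
```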

 

The other important thing here is that unbinding IPv6 from adapters is a relatively common and completely silly practice. It frequently causes issues and doesn't even achieve the goal of properly disabling IPv6 on the machine.

If you want to disable IPv6 – do it properly – via the registry as per

https://learn.microsoft.com/en-us/troubleshoot/windows-server/networking/configure-ipv6-in-windows

Location: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters\
Name: DisabledComponents
Type: REG_DWORD
Min Value: 0x00 (default value)
Max Value: 0xFF (IPv6 disabled)
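If you'd rather script the registry change, a one-liner sketch (0xFF disables IPv6 entirely; note the linked Microsoft doco generally recommends 0x20, "prefer IPv4", over fully disabling – and a reboot is required either way):

```powershell
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters' `
    -Name 'DisabledComponents' -PropertyType DWord -Value 0xFF -Force
```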

Panasonic air con and some stellar support

Email chain with Panasonic support below… asking for the totally unreasonable option of setting a timer for auto-off within the "Comfort Cloud" app. Long version below – short version: Panasonic seem to have some mind-blowingly unhelpful people working for them… go with a different brand.


My initial request via web form

Support type: ContactUs(Feedback or suggestion)

Product details:

Product type: Air Conditioning and Ventilation

Product category: Split Systems – Wall Mounted

Product model: CS-Z42TKR / CU-Z42TKR

We have 2 x CS-Z42XKRW units (didn’t appear to be listed in the model drop-down) … and we would like to be able to set a timer within the comfort cloud app – but we cannot find this setting. Just to be clear, we don’t want a weekly schedule… we want to be able to turn the unit on – and then say “auto-turn off in 2 hours”… our existing panasonic ducted unit can do it (which granted is not controlled by comfort cloud) – but these new units don’t seem to be able to do it. What are we missing ?


Their 1st response, ignoring that I specifically mentioned I was not after a weekly schedule

Dear Hayes,

Thank you for your email.

As per the attached user manual page 16, the application only has a weekly for timer.


My attempt to try and get something… anything out of them.

So… no plans to add this? Anywhere I can request it ? Anything that actually helps at all ?


The Microsoft-support-level-bad response

Hi Hayes,

The comfort cloud app gives you the opportunity to switch your air conditioner off and on at you leisure from anywhere, anytime. It does not have a 2hr timer to automatically turn off your system.

The suggestion is that you set an alarm on your phone to prompt yourself to turn off the air conditioning system from your phone after 2 hrs. if that is your requirement.


So… there you have it… there is no way to request that a multi-billion-dollar company add a basic function to their app… and the response just indicates that the product is, like many things currently, effectively unsupported.

Issues with mailbox migration and multiple identities when tenant-wide retention policies are in use

This is a somewhat niche issue – which is why I'm documenting it here.

Scenario

  • Exchange migration to Exchange Online, which I came into approx 75% of the way through – so I don't have any history on why some things have happened (and there is no useful doco)
  • Tenant-wide retention policies are in place for all data (a legislative requirement, I'm led to believe, for this client)
  • Identity sync via AADConnect
  • Some mailboxes cannot be moved. The PowerShell error message from New-MoveRequest indicates that the identity is not unique

Investigation

  • Start off by looking at the AAD Object sync with
    • Connect-MSOLService
    • (Get-MsolUser -UserPrincipalName identity@goes.here.com).errors.errordetail.objecterrors.errorrecord| fl ErrorCode
    • The output will likely look something like this:
      • The value “<guid-value>” of property “ArchiveGuid” is used by another recipient object. Please specify a unique value.
  • Next up, we want to have a look at the potential duplicate objects
    • Connect-ExchangeOnline
    • Get-recipient -identity <identity> -includesoftdeletedrecipients
      • This will likely show you 2 (or more) mail user objects
    • To confirm the soft-deleted mailuser object you can use
      • Get-MailUser -resultsize unlimited -SoftDeletedMailUser -Identity <identity> | fl *guid*
      • Notice the ArchiveGUID returned is the same as the ArchiveGUID from the Get-MSOLuser error retrieved earlier in the investigation
    • If you then try and run the obvious next step
      • Get-MailUser -Identity <identity> -SoftDeletedMailUser | Remove-MailUser
      • You will get an error similar to
        • Remove-MailUser: The operation couldn't be performed because object 'Soft Deleted Objects\<identity>' couldn't be found on 'SYBPR01A004DC01.AUSPR01A004.PROD.OUTLOOK.COM'

Now, I know what you're thinking: "just exclude the mailbox from the retention policy" – and therein lies the issue… there is no mailbox, only a mail user object, but with an archive mailbox that has been retained by the retention policy after the primary mailbox was removed. It is then, to my knowledge, impossible to exclude that archive mailbox from retention – as it's associated with a mail user, not a mailbox.

As to how these identities got into this state… absolutely no idea. I wasn't around for the earlier parts of the project – but given some other things I've seen at the client, standardisation and documentation appear to be frowned upon (which is why I'm getting out ASAP).

 

Solution

The unfortunate solution is to log a call with O365 support.

I included all of the above information in my original support request and was still asked to run a "Get-Mailbox"… I included all the info again (and again, and again over a Teams call I showed them the exact same errors and data that I had sent them) – and eventually they got the point (it took approx 15 business days) and sent it to an internal team, who deleted the objects.

Unfortunately I can't post the case number for reference (as it would potentially identify the client) – but maybe pointing MS support to this article might speed the process for others(?). Ideally there would be a way around this without engaging support – but there is not, as far as I'm aware, as of June 2023.

Issue with manually created EXO inbound connector in hybrid environment

I'm working at a client who is approx 75% of the way through their migration to Exchange Online – and there are some odd things I'm running into – so here's one of them.

The scenario and issue

  • Exchange hybrid setup, with servers on prem and EXO active. Active mailboxes in both.
  • Mail flow from on prem to EXO shows the following:
    • Outbound SMTP logs shows the message being handed off correctly to EXO
    • Message tracking in EXO shows 3 copies of the message, all of which, when looking into the details are bounces
    • When looking in security.microsoft.com, the messages have been flagged as phishing attempts… with seemingly no way to flag them as not phishing attempts
  • The connectors on-prem looked OK, and after double, triple and nineteen-thousandth checking, they were solid
  • The connectors in EXO were manually created (for reasons that pre-date me) and the HCW-created connectors had been disabled. No idea why.
  • The connectors in EXO looked fine and validated without any issue
  • After circling around for ages, I compared the disabled HCW connector with the active connector using "Get-InboundConnector | fl"
  • This is when I noticed that the HCW-created connector had IPs in the "EFSkipIPs" property

The Fix

  • EFSkipIPs can be configured as per the Set-InboundConnector PowerShell doco
  • The EFSkipIPs property defines IPs that should be excluded from enhanced filtering. Since the HCW automatically populates this field, most of us will never have to use it… but if some bright spark decides that the HCW isn't good enough for them (for whatever reason), then this becomes important.
  • Because I had the previous, disabled connector created by the HCW, I already knew the IPs I needed to add. If you don't have this, you will need to get the public IP that your on-prem servers present to EXO. This could be obtained with something such as www.whatsmyip.com
  • The property is multi-valued… it would have been nice if an example was included on the doco page… so since there isn't one in the official doc, here is one below:

Set-InboundConnector -Identity "OrgToEXO" -EFSkipIPs @{Add="xx.xx.xx.xx", "xy.xy.xy.xy"}

  • After that, I needed to wait approx 15 minutes (not sure of the exact time, but it didn't work straight away) – and bingo-bango – no more mail flow issue
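To confirm the change landed, you can read the property back (connector name as per the example above):

```powershell
# Should list the IPs you just added
(Get-InboundConnector -Identity "OrgToEXO").EFSkipIPs
```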

Consolidating services into Azure

Recently I had an exceedingly poor experience with my external DNS provider, Namecheap. After they had some mail issues, their 2FA emails weren't coming through… I could see they weren't even hitting O365… but of course, their support refused to acknowledge this – and went down a path of (bizarrely) insistently asking for a scan of government-issued ID – very scammer-like. This was enough to make me re-evaluate my external services and where they lived – with a specific view to bringing them into Azure…

 

Why bring all the services into O365/Azure ?

  • One provider… and MS are a provider that isn’t disappearing anytime soon. I can’t see us moving away from O365 in the foreseeable future – so if that service is anchored – why not move others towards it ?
  • Azure management interface and scripting are generally pretty good
  • MS support is generally terrible…. But they have never tried to get me to send a government issued photo ID. Community support around Azure/O365 varies greatly – but there are many great blog articles etc around.
  • Cost – MS partners can get Azure credit with some partnership options – some months I use it all – other months I don’t – so it makes sense to use as much of the credit as possible

 

DNS

DNS seemed like the easiest candidate and it was also the service that was about to expire on Namecheap.

I logged a call with O365 support, asking about transferring a DNS zone into O365/Azure… The guy was actually reasonably nice and tried to be helpful – but seemed to have it in his head that DNS was a website or something…. Anyway, the upshot of the conversation was “no, you can’t transfer in… you can only use O365 DNS if you purchased the domain from MS”

After this I went off and did some searching and found the incredibly aptly named Azure DNS.

5 minutes later, it was all set up and ready to go:

  • Go to the Azure portal
  • Create resource
  • Networking -> DNS Zone
  • Create
    • Select your subscription, resource group and zone name
  • Add your records
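The same steps can be done via Az PowerShell (Az.Dns module) – a sketch with placeholder zone, resource group and IP:

```powershell
# Create the zone, then add an example A record
New-AzDnsZone -Name 'company.com' -ResourceGroupName 'rg-dns'
New-AzDnsRecordSet -ZoneName 'company.com' -ResourceGroupName 'rg-dns' `
    -Name 'www' -RecordType A -Ttl 3600 `
    -DnsRecords (New-AzDnsRecordConfig -IPv4Address '203.0.113.10')
```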

 

I tested the service before updating my registrar using

nslookup <record name> ns1-02.azure-dns.com.
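Or the PowerShell equivalent, querying the Azure name server directly before changing the registrar delegation (record name is a placeholder):

```powershell
Resolve-DnsName -Name 'www.company.com' -Server 'ns1-02.azure-dns.com'
```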

I then waited a few days – as I wanted to see how much the DNS zone would cost without usage (as Azure pricing pages are exceedingly difficult to decipher IMO) – and while this will obviously vary greatly for everyone – for my zone after 5 days (with no traffic mind you) – the cost for that service was a whopping $0.05.

 

Based on that, I updated my registrar to point to the Azure DNS servers, then ran an O365 check – just in case – and all was good.

 

Domain transfer

Given the above conversation, I thought it was unlikely, but quickly found these items via google

https://learn.microsoft.com/en-us/answers/questions/2168/how-can-i-transfer-a-domain-from-godaddy-to-azure

https://jrudlin.github.io/2018/10/27/domain-name-registration-transfer-to-azure-app-service-domains/

 

So it is possible – but it's a bit of a kludge… additionally, according to the first forum post at least, the ability to "transfer in" is on the MS radar.

 

Given my current domain registrations do not run out until 2024, I am going to wait until they are closer to expiry, then come back and see if MS have an officially supported method of transferring domain registration into O365/Azure.

 

WordPress

WordPress on Azure went GA in August 2022 – and you can find some details about it here – https://learn.microsoft.com/en-us/azure/app-service/quickstart-wordpress

 

Unfortunately, when going to https://portal.azure.com/#create/WordPress.WordPress – I am immediately presented with "MySQL Flexible Server is not available for your selection of subscription and location"… changing location does nothing – so it's something to do with my partner subscription… Wouldn't want partners to be able to explore your product set and become more familiar with the wide range of Azure offerings… (or write blog posts on how to use their products) – can't have that! Geez, MS licensing people make some whacky fucking decisions.

 

Static Websites

Last up was my company website, which is a static HTML website. After some googling, I found there were a few methods, such as using an Azure storage account – but that seemed to have some limitations around certificate assignment and host headers (from reading other posts). The other main option appeared to be Azure Static Web Apps – a more complete offering, but also more complex. It required linkage to a GitHub or Azure DevOps account and asked me a bunch of questions that I had NFI about. Remember, I'm an infra nerd… so once it goes past PowerShell (or VBScript… or JSON if I have to) – it's all quantum realm magic to me.

Anyway, after some reading and making a few mistakes, the rough process is:

  • Create a GitHub account (I went GitHub – since I already had an account and some code in there)
  • Create a project in GitHub
  • Upload the static html/css site to the GitHub project
    • For whatever reason, only about 90% of the files uploaded on the first try – but there were no errors. I only found out some files were missing when I tried to use the published website. I'm a newbie to GitHub – so maybe I did something wrong – but it's worth looking out for
  • Go to the Azure portal
  • Create resource
  • Search for “Static Web app”
  • Create
    • Select your subscription, resource group and name
    • Select your hosting plan…. Free is obviously a good place to start – you can always upgrade it later
    • Deployment details – I selected “GitHub”
    • Authorise the connection between the static web app and GitHub
  • The site will now be ready via the Azure URL – which is great for testing to make sure everything is correct
    • My site was ready fairly quickly – but a number of the images didn’t display.
    • I posted on a forum about this and eventually found that file names within the Static Web App are case-sensitive… so my HTML referred to background.jpg… when the file was named Background.jpg… I got rid of the capitalisation once I realised, and all was good.
  • Once everything is correct
    • Add your custom domain
      • Azure static web app -> custom domains
      • Add – custom domain on Azure DNS
      • Select your DNS zone from the drop down
      • In the domain name box, you must enter the FQDN… e.g. www.company.com, not just "www" (given that you select the zone in the other drop-down, this is confusing)
      • Now – as per this bug – https://github.com/Azure/static-web-apps/issues/202 – I found I got the error “Failed to add custom domain to SWA with error message”… but the CName entry was still actually added… this was a start… but since it did not show up in “custom domains” – the site still did not work without that host header.
      • Due to this, I simply added it as a “custom domain” (even though the DNS was/is hosted in Azure DNS) – and it took a minute to validate, but worked fine
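One way to avoid the case-sensitivity gotcha above is to lowercase every file name before uploading – a rough sketch (the two-step rename is needed because Windows treats Background.jpg and background.jpg as the same file):

```powershell
Get-ChildItem -Path .\site -Recurse -File | ForEach-Object {
    $lower = $_.Name.ToLower()
    if ($_.Name -cne $lower) {                      # -cne = case-sensitive compare
        Rename-Item -LiteralPath $_.FullName -NewName ($_.Name + '.tmp')
        Rename-Item -LiteralPath ($_.FullName + '.tmp') -NewName $lower
    }
}
```

You would also need your HTML to reference the lowercase names, of course.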

 

In summary

  • Azure DNS – easy
  • Azure Static Web Apps – easy-ish… but it wasn't clear that file names are case-sensitive, and the adding of a custom domain seems very buggy
  • Not being able to transfer to MS as a domain registrar is a bizarre omission
  • Microsoft licensing people still make decisions by rolling a D20 inside a Zorba ball when drunk – this is unlikely to change in my lifetime
  • Run the fuck away from NameCheap

Server and SQL upgrades – lessons learned

I'm just finishing up on a project where I was upgrading a bunch of servers from 2012 R2 to 2019 or 2022 (depending on what the associated app supported), including a bunch of SQL clusters.

I've always been SQL-adjacent – working with/upgrading/installing SQL for other products to utilise… so I have some incidental knowledge – but it's not my core skill set.

Things of note from the upgrades were:

 

When performing an in-place OS upgrade – upgrade speed can be significantly increased if you remove old user profiles

Some of the servers I was upgrading had hundreds of profiles on them that had not been used for a year or more… all servers had at least 20 "Account Unknown" profiles.
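A sketch for finding the stale profiles before an upgrade (note LastUseTime isn't always reliable on newer Windows builds, so review the list before piping it to Remove-CimInstance):

```powershell
$cutoff = (Get-Date).AddDays(-365)
Get-CimInstance Win32_UserProfile |
    Where-Object { -not $_.Special -and $_.LastUseTime -lt $cutoff } |
    Select-Object LocalPath, LastUseTime
# Append "| Remove-CimInstance" to actually delete the profiles (and their files)
```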

 

SQL Error Logging

The best way to find the error log if any upgrade goes wrong is to look in the registry at

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\<instance/version>\MSSQLServer\Parameters\

You can then copy/paste the path to the error log and get some helpful errors out
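A sketch that digs the error log path out for each installed instance (the -e startup parameter holds the ERRORLOG path):

```powershell
$base = 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server'
Get-ChildItem $base | Where-Object { $_.PSChildName -like 'MSSQL*.*' } | ForEach-Object {
    $params = Get-ItemProperty "$($_.PSPath)\MSSQLServer\Parameters" -ErrorAction SilentlyContinue
    $params.PSObject.Properties |
        Where-Object { $_.Name -like 'SQLArg*' -and $_.Value -like '-e*' } |
        ForEach-Object { $_.Value.Substring(2) }    # strip the "-e" to leave the path
}
```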

 

SSISDB is the bane of SQL cluster upgrades

SQL 2014 and below don't support replicating SSISDB via an AAG, so before you service pack, this DB must be removed from AAG replication and the copy on the passive nodes deleted.

SQL 2016 and above support replicating SSISDB – so service packs can be applied without having to remove SSISDB from anywhere

Major version upgrades (e.g. SQL 2014 or 2016 to SQL 2019), however, do not allow SSISDB to be part of an AAG – so SSISDB must be removed from the replication group and the copy on the passive nodes deleted first.
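The removal itself is a couple of T-SQL statements – a sketch via Invoke-Sqlcmd, with the AG and instance names as placeholders:

```powershell
# On the primary: take SSISDB out of the availability group
Invoke-Sqlcmd -ServerInstance 'NodeX\Instance' -Query 'ALTER AVAILABILITY GROUP [AAG1] REMOVE DATABASE [SSISDB];'
# On each passive node: drop the now-orphaned local copy
Invoke-Sqlcmd -ServerInstance 'NodeY\Instance' -Query 'DROP DATABASE [SSISDB];'
```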

If you forget this, you will likely see an error message similar to 

Script level upgrade for database ‘master’ failed because upgrade step ‘SSIS_hotfix_install.sql’ encountered error 15151, state 1, severity 16

 

Starting SQL to fix issues

So – you have run into an issue with the upgrade because, for example, SSISDB was still replicated… but now you can't start the SQL service to delete it.

This is where trace flag /T902 comes in handy (it starts SQL without running the script-level upgrade):

  • Get the short name of your SQL service (from services.msc)
  • Open an elevated command prompt
  • net start MSSQL$Instancename /T902

You can then do what you need to the SQL configuration.

 

Reporting services

Reporting Services in 2017 and above is not a straight upgrade from 2016 and below – it's a separate installation with its own migration path. There are plenty of articles around the web on the upgrade process.

 

During inventory, make sure you discover SSISDB and Reporting Services instances

In hindsight, one of the things I would have focused on more in my pre-upgrade inventory script is identifying SSISDB and Reporting Services instances.

Many of these in the recent project were present but not actually needed/in-use and could just be uninstalled.

 

Cluster rolling upgrades

This is well documented – but just to make it nice and short (the MS doco makes it seem harder than it is)

  • Ensure SQL AAG and cluster resource active node is node “X”
  • Ensure failover is set to manual
  • Verify SQL AAG is healthy and all databases are sync’ed
  • Service pack the current version of SQL – so it will support Server 2019
  • Node Y – Upgrade 2012R2 to 2016 – Check node is still able to join cluster
  • Node Z – Upgrade 2012R2 to 2016 – Check node is still able to join cluster
  • Node X – Failover SQL AAG and cluster core resources to another node (e.g. Node Z)
  • Node X – Upgrade 2012R2 to 2016 – Check node is still able to join cluster
  • Upgrade cluster functional level
  • Node X – Upgrade 2016 to 2019 – Check node is still able to join cluster
  • Verify SQL AAG is healthy and all databases are sync’ed
  • Node X – Upgrade SQL 20xx to SQL 2019 with current CU
  • Node X – Failover SQL AAG and cluster core resources back to node Z
    • Once you do this, you will not be able to fail over to other nodes until they are also upgraded. Replication will also stop to "lower" version nodes – don't freak out when you see this (like I did on my first upgrade!)
  • Node Y – Upgrade 2016 to 2019 – Check node is still able to join cluster
  • Node Y – Upgrade SQL 20xx to SQL 2019 with current CU
  • Node Z – Upgrade 2016 to 2019 – Check node is still able to join cluster
  • Node Z – Upgrade SQL 20xx to SQL 2019 with current CU
  • Upgrade cluster functional level
  • On each database on Node Y and Node Z, you will need to go into SQL Management Studio and select "resume data movement" – this tells SQL to try again, which will now work, as the same version of SQL is in use across the cluster
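The "resume data movement" step can also be done per database with T-SQL, run against each upgraded secondary (names are placeholders):

```powershell
Invoke-Sqlcmd -ServerInstance 'NodeY\Instance' -Query 'ALTER DATABASE [YourDB] SET HADR RESUME;'
```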