Category Archives: VMware

Mac SMB can’t connect to Server 2016 (File Server) Microsoft Failover Clustering Services

We ran into an issue with Macs connecting to our file services while attempting to upgrade a Microsoft Failover Clustering file services deployment running on Server 2012 R2.

Current Environment:

Two 2012 R2 servers and two 2016 servers, with the following Roles/Features installed:

ROLES – File and Storage Services:

  • File Server
  • DFS Namespaces
  • DFS Replication
  • File Server Resource Manager

FEATURES

  • Failover Clustering

Testing:

Two virtual machines running 2012 R2 with Failover Clustering and multiple File Server roles. Everything works when the Macs connect to these clustered file services while all nodes are running 2012 R2, and the cluster functional level is also 2012 R2.

However, after adding a 2016 server to this cluster and failing one of the file server roles over to it, the Macs can no longer connect to that file server. They receive the following message:

There was a problem connecting to the server “”. Check the server name or IP address, and then try again. If you continue to have problems, contact your system administrator.

If you migrate the file server role back to a server running 2012 R2, the Macs can once again connect.
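
For anyone who wants to test the connection outside of Finder, the same check can be done from Terminal on the Mac. A quick sketch with hypothetical server and share names (smbutil and mount_smbfs ship with macOS):

# Hypothetical names: fs-role01 is the clustered file server role, Share is a share on it.
# smbutil view lists the shares the server offers; mount_smbfs attempts an actual mount.
smbutil view //aduser@fs-role01
mkdir -p ~/smbtest
mount_smbfs //aduser@fs-role01/Share ~/smbtest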

Resolution:

I plan to come back to this post with a more detailed writeup. A lot of this information was passed on to me and I have not verified all of it firsthand, but I will try my best to explain what I believe is happening.

When a 2016 server is added to a 2012 R2-only cluster, the cluster moves into “mixed mode” so that both operating systems can participate. Microsoft states you should not stay in this mode for long; the figure I have seen thrown around is no more than four weeks.

This is secondhand information based on packet captures, but when a Mac tries to connect to the file services running on a 2016 server while the cluster is in mixed mode, it supposedly negotiates SMB 3.1.1, and then something in the stack tries to downgrade the connection to SMB 2.0. The Macs cannot follow that downgrade and therefore cannot connect to the server.
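
One way to confirm which dialect a Mac actually negotiated (against a role that does mount, for example while it is still on a 2012 R2 node) is smbutil; the SMB_VERSION field shows the dialect in use. A small sketch, assuming the share is already mounted:

# Show SMB details for every mounted share, including the negotiated dialect (SMB_VERSION).
smbutil statshares -a
# Or limit it to a single mount point (hypothetical path).
smbutil statshares -m /Volumes/Share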

However, after removing the 2012 R2 servers and then upgrading the cluster functional level to 2016, the Macs can connect again.

I’m still doing some troubleshooting and this post will be updated.

CISA, VMware, and Mandiant, Oh My!

CISA released an alert yesterday covering VMware’s recommendations for threat hunting and for securing your VMware environments against the malware described in Mandiant’s report. (https://www.cisa.gov/uscert/ncas/current-activity/2022/09/29/vmware-releases-guidance-virtualpita-virtualpie-and-virtualgate)

Mandiant released a blog yesterday on “Investigating Novel Malware Persistence Within ESXi Hypervisors” (https://www.mandiant.com/resources/blog/esxi-hypervisors-malware-persistence)

So what does this all mean for you?

First, don’t go running down the street with your hands in the air: Mandiant has not uncovered any vulnerabilities that were exploited to gain access to ESXi. Threat actors still need the proper rights (root) on ESXi to install backdoor VIBs. However, since many people use central authentication systems like Active Directory, it may be easier for threat actors to pivot into your ESXi environment if Active Directory is compromised.
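
If you want a quick look at what is actually installed on a host, the VIB inventory and signature state can be checked from the ESXi shell. A small sketch using standard esxcli namespaces (run per host, or wrap it in your own automation):

# List every installed VIB along with its vendor and acceptance level.
esxcli software vib list
# On recent ESXi builds, verify the signatures of the installed VIBs;
# anything unsigned or unexpected deserves a closer look.
esxcli software vib signature verify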

The CISA link above provides all of VMware’s important links to make sure you are as secure as possible. I’d highly recommend reading through all of the material VMware has put out.

The best thing you can do is set up Defense in Depth.
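
One concrete layer that comes up in VMware’s hardening material is the execInstalledOnly kernel option, which tells ESXi to only execute binaries that were delivered as part of an installed VIB (see the guidance linked above for the full list of recommendations). A hedged sketch from the ESXi shell; test first, as the setting takes effect after a reboot:

# Check the current state of execInstalledOnly on the host.
esxcli system settings kernel list -o execinstalledonly
# Enable it so ESXi refuses to run binaries that did not come from an installed VIB.
esxcli system settings kernel set --setting=execinstalledonly --value=TRUE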

Changing vCenter Authentication [AD over LDAP(s)]

**EDIT** If you log into vCenter with an Active Directory account, you should be able to modify an existing identity source. I had been logging in with the local administrator account.

For reference, we already had our linked vCenter talking to Active Directory over LDAPS. However, we are currently in the process of migrating all of our VMs to new hardware, and when we tried to move the main Active Directory server providing authentication to vCenter, let’s just say vCenter was not happy.

When we went into Identity Sources and tried to manually update the server(s) on the identity source that was already in use, we received the following message: “Check the network settings and make sure you have network access to the identity source”.

It was only after some Googling that we found you have to remove the currently running identity source in order to make changes. In other words, delete the current identity source and add a “new” one with the changes you want to make.

This just seems bad.

However, after doing a lot of testing in our TEST environment, I could not run into any snags. If you log in with administrator@vsphere.local, delete the identity source, and then immediately re-add it with the same domain name, alias, etc., there do not seem to be any issues. All of your permissions on objects defined with AD groups remain intact.

I used the method listed in this VMware KB to grab the certificates I needed from both the primary and secondary Active Directory servers. (https://kb.vmware.com/s/article/2041378)
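
The KB essentially boils down to pulling the certificate that each domain controller presents on its LDAPS port. A minimal sketch with openssl and hypothetical DC names (one way to do it, not necessarily the KB’s exact steps):

# Hypothetical hostnames; save the PEM certificate presented on port 636 of each DC,
# then upload these files when re-adding the AD over LDAPS identity source.
echo | openssl s_client -connect dc01.example.com:636 -showcerts 2>/dev/null | openssl x509 -outform PEM > dc01.pem
echo | openssl s_client -connect dc02.example.com:636 -showcerts 2>/dev/null | openssl x509 -outform PEM > dc02.pem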

Server 2012R2 in place upgrade to Server 2019 on VMware

I’m personally not a fan of in-place Microsoft Server upgrades, but I suppose they have their time and place.

Since many of our 2012 R2 servers date back to the vSphere 5.1 and 5.5 days, a lot of them are still running virtual hardware version 9 (v9). This hardware version needs to be upgraded before the OS upgrade will succeed.

I was able to reproduce the issue by upgrading a clean 2012 R2 install on v9 hardware. After the first reboot you get stuck at the black screen with the blue Windows logo, with no spinning circle underneath. I let this run for two full days (48 hours) before cancelling it.

After cancelling it and resetting the VM, you will be given the following error message:

We couldn’t install Windows Server 2019

We’ve set your PC back to the way it was right before you started installing Windows Server 2019.

0xC1900101 – 0x20017

The installation failed in the SAFE_OS phase with an error during BOOT operation

VMware generally states that you shouldn’t upgrade the VM hardware version unless there is a need. In this case there is a need.

My recommendations would be to do the following:

  1. Shut down the VM you want to perform an in place upgrade on
  2. Take a snapshot with the VM off
  3. Upgrade the Virtual Machine hardware version (We went to v15)
  4. Power on the VM, mount the ISO, run the upgrade
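
If you have a pile of these to do, steps 1 through 3 can also be scripted. A rough sketch using govc (the govmomi CLI; this is an assumption on my part, exact flag names can differ between govc versions, and the same steps work just as well in the vSphere client):

# Hypothetical VM name (app01); GOVC_URL and credentials must already be set in the environment.
govc vm.power -off app01                           # 1. shut the VM down
govc snapshot.create -vm app01 pre-2019-upgrade    # 2. snapshot while powered off
govc vm.upgrade -version=15 -vm app01              # 3. upgrade the virtual hardware to v15
govc vm.power -on app01                            # 4. power on, then mount the ISO and run the upgrade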

This process seems to be working for us, and although this may be a no-brainer, I’m putting it out there for the search engines to index in case it does help someone.

Kali Linux on Intel MacBook Pro 16″ with VMware Fusion 12.1.2

I have been struggling to figure out why Kali Linux would not update after a fresh install in VMware Fusion, virtualized on my Intel MacBook Pro 16″ laptop.

I was receiving one of the following two error messages when trying to run “sudo apt update” on a fresh install:

The following signatures were invalid: BADSIG ED444FF07D8D0BF6 Kali Linux Repository <devel@kali.org>

OR:

apt-get update
Get:1 http://kali.mirror.garr.it/mirrors/kali kali-rolling InRelease [30.5 kB]
Get:2 http://kali.mirror.garr.it/mirrors/kali kali-rolling/contrib Sources [66.1 kB]
Get:3 http://kali.mirror.garr.it/mirrors/kali kali-rolling/non-free Sources [124 kB]
Get:4 http://kali.mirror.garr.it/mirrors/kali kali-rolling/main Sources [11.0 MB]
Err:4 http://kali.mirror.garr.it/mirrors/kali kali-rolling/main Sources

Hash Sum mismatch
Hashes of expected file:
 - Filesize:11015732 [weak]
 - SHA256:b20b6264d4bd5200e6e3cf319df56bd7fea9b2ff5c9dbd44f3e7e530a6e6b9e0
 - SHA1:2d8b15ab8109d678fe1810800e0be8ce3be87201 [weak]
 - MD5Sum:d0b5f94ba474b31f00f8911ac78258ec [weak]

Hashes of received file:
 - SHA256:a7b9ca82fc1a400b2e81b2ebc938542abfdbfa5aecdfa8744f60571746ec967b
 - SHA1:5d870530aa87398dcb11ecb07e6a25ca0746985f [weak]
 - MD5Sum:9a4824220c0a5fa6cb74390851116b73 [weak]
 - Filesize:9828918 [weak]
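
If you hit these errors, it is worth first ruling out a stale or partially downloaded package index in the guest, since that can produce the same Hash Sum mismatch. A standard apt cleanup sequence (nothing here is specific to Fusion or the MacBook):

# Throw away the cached package indexes and any partial downloads, then re-fetch.
sudo rm -rf /var/lib/apt/lists/*
sudo apt clean
sudo apt update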

There seems to be an issue with VMware Fusion’s network management when sharing a WiFi connection. I’ve read on some forums that people have had luck with sharing the connection instead of bridging it, but if I try to share the connection I lose internet on my Kali VM.

The only way I can keep a connection is to bridge it, which gives me an IP from my wireless network and lets me browse the Internet, but something is being done to the traffic during updates that trips apt’s signature and hash checks.

My current workaround was to plug in another USB WiFi adapter and pass it through to the VM, letting the VM use it to connect to my wireless network directly.

This only appears to be an issue when installing or updating software, and I’m not quite sure what the network stack is doing underneath. When I have more time I hope to dig into this further.

VMware vCenter 6.7 Certificate Status Error

After rebooting our vCenter appliance, we noticed an error on vCenter regarding “Certificate Status”.

After going to the Administration section, clicking “Certificate Management”, and logging in to verify the certificates, we saw nothing out of order; all of the VMware-provided certificates were fine. I decided to keep digging.

I started Googling and found the following command posted on Reddit by zwamkat:
https://www.reddit.com/r/vmware/comments/it4dmq/vcsa_certificate_status_alarm_triggered/

for i in $(/usr/lib/vmware-vmafd/bin/vecs-cli store list); do echo STORE $i; /usr/lib/vmware-vmafd/bin/vecs-cli entry list --store $i --text | egrep "Alias|Not After"; done

This provided the output necessary to see all of the certificates on the vCenter appliance, including third-party certificates. We noticed that we still had a third-party certificate listed in vCenter with an upcoming expiration date, even though we had already replaced it.
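
For reference, the same vecs-cli tool can dump a single store in detail and, once a stale entry is confirmed to be unused, delete it. A sketch with placeholder store and alias names (take a backup or snapshot of the appliance before touching the certificate stores):

# Placeholder store/alias names; list the suspect store in full first.
/usr/lib/vmware-vmafd/bin/vecs-cli entry list --store BACKUP_STORE --text
# Remove the stale third-party entry once you are sure nothing still references it.
/usr/lib/vmware-vmafd/bin/vecs-cli entry delete --store BACKUP_STORE --alias stale-thirdparty-cert -y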

We are following up with the third-party vendor to get to a resolution.