So… I just had a nice week of troubleshooting a Horizon 7 SAML issue with VMware Identity Manager 1903 (SaaS), together with GSS. Finally got it fixed with some good old log digging.
The environment I am working on is based on:
- Horizon 7.8 (2 pods, each pod with 2 Connection Servers)
- VMware Identity Manager 1903 (SaaS)
- VMware Identity Manager Connector 1903 in HA on Windows (AD sync and Horizon sync to the SaaS tenant)
- UAG 3.5
The environment was upgraded from Horizon 7.7, and I downgraded the vIDM Connector to 3.3 to rule it out as a cause, but that did not matter at all…
The problem:
The problem was that no matter how I configured the vIDM tenant, I kept getting a SAML error when launching a desktop: “This Horizon Server expects to get your logon credentials from another application or server.”
It is a multi-pod environment and I have done this a million times before, but for the love of… I could not get it working in this environment. The Connection Servers were added as expected and I could sync my desktops into vIDM. All desktop entitlements were there, but actually logging into a desktop would not work… The whole Horizon environment works just fine with the sub-domain DNS records, just not in conjunction with vIDM.
Normally this issue arises when:
- Time sync is off between the vIDM Connector and the Connection Servers (a quick offset check is sketched right after this list).
- Certificates are wrong.
- Domain trusts are not correct.
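Since clock skew is the most common of these causes, a quick way to check it is to compare each box (the vIDM Connector and every Connection Server) against the same NTP reference. A minimal sketch, assuming plain SNTP is reachable and using pool.ntp.org purely as an example server:

```python
import socket
import struct
import time

NTP_DELTA = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def ntp_time(server="pool.ntp.org", port=123, timeout=5):
    """Query an (S)NTP server and return its idea of 'now' as a Unix timestamp."""
    packet = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, port))
        data, _ = sock.recvfrom(48)
    transmit_seconds = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, whole seconds
    return transmit_seconds - NTP_DELTA

# Run this on the vIDM Connector and on each Connection Server, then compare the offsets.
offset = time.time() - ntp_time()
print(f"Local clock offset versus NTP: {offset:+.1f} seconds")
```

SAML assertions are time-bounded, so a skew of more than a few minutes between the boxes is typically enough to make the Connection Server reject them.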
Log file locations:
VMware Identity Manager Connector:
C:\VMware\VMwareIdentityManager\Connector\opt\vmware\horizon\workspace\logs
- Connector.log
Horizon 7.8:
C:\ProgramData\VMware\VDM\logs
- Debug.log
In the Connector.log you can see whether the sync with the Connection Servers is correct. Note that when using UAGs the external URLs are not used.
name = E01CS01, serverAddress = https://e01cs01.dc01.domain.lan:443, enabled = true, tags = null, externalURL = https://e01cs01.dc01.domain.lan:443, externalPCoIPURL = 192.168.11.101:4172, auxillaryExternalPCoIPIPv4Address = null, externalAppblastURL = https://e01cs01.dc01.domain.lan:443, bypassTunnel = true, bypassPCoIPGateway = true, bypassAppBlastGateway = true, version = 7.8.0-12637483
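If you want to double-check which serverAddress vIDM actually stored for each Connection Server without scrolling through the whole file, a few lines of Python can pull those sync entries out of Connector.log. A rough sketch, assuming the default install path from the log locations above:

```python
import re
from pathlib import Path

# Default Connector install path from the log locations listed above; adjust if needed.
log_file = Path(r"C:\VMware\VMwareIdentityManager\Connector\opt\vmware\horizon"
                r"\workspace\logs\connector.log")

# Pull the name and serverAddress out of the Horizon sync entries.
entry = re.compile(r"name = (?P<name>\S+), serverAddress = (?P<address>\S+),")

for line in log_file.read_text(errors="ignore").splitlines():
    match = entry.search(line)
    if match:
        print(f"{match.group('name')}: vIDM talks to {match.group('address')}")
```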
In the debug.log of one of the Connection Servers I saw:
2019-06-24T13:08:46.695+02:00 DEBUG (13B8-1BE4) [EventLogger] (SESSION:0748_***_bce6) Error_Event:[BROKER_USER_AUTHFAILED_SAML_ACCESS_DENIED] "SAML access denied because of invalid assertion/artifact": Node=e01cs01.domain.lan, Severity=AUDIT_FAIL, Time=Mon Jun 24 13:08:46 CEST 2019, Module=Broker, Source=com.vmware.vdi.broker.filters.SamlAuthFilter, Acknowledged=true
2019-06-24T13:08:46.696+02:00 ERROR (13B8-1BE4) [ProperoAuthFilter] (SESSION:0748_***_bce6) Error performing authentication: Enabled SAML Authenticator's Issuer/entityId not matched with SAML Artifact com.vmware.vdi.broker.filters.FatalAuthException: Enabled SAML Authenticator's Issuer/entityId not matched with SAML Artifact
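To check all Connection Servers for this particular failure in one go, you can scan the debug logs in the directory listed earlier for the two marker strings from the excerpt above. A quick sketch (the debug* glob is an assumption about the file naming; point it at whatever your debug log files are called):

```python
from pathlib import Path

# Connection Server log directory from the log locations listed above.
log_dir = Path(r"C:\ProgramData\VMware\VDM\logs")

# Marker strings taken straight from the debug.log excerpt above.
markers = (
    "BROKER_USER_AUTHFAILED_SAML_ACCESS_DENIED",
    "Issuer/entityId not matched with SAML Artifact",
)

for log_file in sorted(log_dir.glob("debug*")):
    for number, line in enumerate(log_file.read_text(errors="ignore").splitlines(), start=1):
        if any(marker in line for marker in markers):
            print(f"{log_file.name}:{number}: {line.strip()}")
```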
And in the Windows Event Viewer:
The description for Event ID 104 from source VMware View cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
BROKER_USER_AUTHFAILED_SAML_ACCESS_DENIED
SAML access denied because of invalid assertion/artifact
Attributes:
Node=e01cs01.domain.lan
Severity=AUDIT_FAIL
Time=Mon Jun 24 13:08:46 CEST 2019
Module=Broker
Source=com.vmware.vdi.broker.filters.SamlAuthFilter
Acknowledged=true
The specified resource type cannot be found in the image file
This got me thinking… vIDM/Horizon is using the node name (the Connection Server's domain-joined machine name), e01cs01.domain.lan, in the SAML artifact. However, all my machines point to another record in DNS: e01cs01.dc01.domain.lan.
So adding a pod to vIDM via e01cs01.dc01.domain.lan seems legit… everything syncs and appears to work fine, except SAML. After reconfiguring vIDM to add the pods with the domain-joined DNS name (e01cs01.domain.lan), SAML works! So the lesson learned here is that the SAML artifact validation uses the Connection Server's domain-joined machine name…
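So before adding a pod to vIDM, it is worth comparing the name you are about to register against the Connection Server's domain-joined name (the Node= value in the logs above). A small sketch of that sanity check, assuming your reverse DNS zone points back at the machine's domain-joined name, and using the example names from this environment:

```python
import socket

# The FQDN you plan to register the pod with in vIDM (example name from this post).
configured_fqdn = "e01cs01.dc01.domain.lan"

# Resolve the record, then ask DNS which host name the address really belongs to.
address = socket.gethostbyname(configured_fqdn)
node_name, _aliases, _addresses = socket.gethostbyaddr(address)

print(f"{configured_fqdn} -> {address} -> node name {node_name}")
if node_name.lower() != configured_fqdn.lower():
    print(f"Mismatch: register the pod in vIDM as {node_name}, "
          "or the SAML artifact check on the Connection Server will fail.")
```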
All the logs and data were sent to GSS for a support case, but VMware had not come to this conclusion yet. Always create a case with GSS if you run into an issue, even if you end up fixing it yourself. The case will now be closed: fixed by myself 🙂
Hi LaurensvanDuijn,
I am facing the same issue. I couldn’t understand the fix that you mentioned.
My Setup
Connection Server – mycs1.dc.local
Identity Manager – myvidm.dc.com.ph
vIDM Connector – myvidmcon.dc.local
Please find the event logs from the connection server:
The description for Event ID 104 from source VMware View cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
BROKER_USER_AUTHFAILED_SAML_ACCESS_DENIED
SAML access denied because of invalid assertion/artifact
Attributes:
Node=MYCS1.dc.local
Severity=AUDIT_FAIL
Time=Wed Jul 17 17:59:13 SGT 2019
Module=Broker
Source=com.vmware.vdi.broker.filters.SamlAuthFilter
Acknowledged=true
The specified resource type cannot be found in the image file
Okay.
My issue got fixed. It was due to time sync.
I updated all the VMs to sync time with the host, as I am already syncing the host time with NTP.
Yea that was going to be my first guess too. Time. Glad you fixed it!