
Commit 1613f0c

Added more rules and triage guides geared toward Microsoft environments
1 parent 34a61a4

19 files changed: 892 additions & 0 deletions

content/triage-guides/sentinel/collection/graph-mail-search-keyword-burst.md

Whitespace-only changes.
Lines changed: 54 additions & 0 deletions
# Quick Assist Followed by Batch or PowerShell Execution

## Goal

Identify suspicious use of Quick Assist followed by batch or PowerShell execution, which may indicate social-engineering-driven remote access abuse.

## Why This Alert Matters

Quick Assist is a legitimate Microsoft support tool, but it is increasingly abused by attackers posing as helpdesk or IT support. Script execution after the session starts is a strong sign that the session may be malicious.

## What the Detection Is Looking For

This detection looks for:

- Quick Assist execution
- followed by batch script or PowerShell activity
- with suspicious command-line indicators

## Initial Triage Questions

1. Did the user request support?
2. Was Quick Assist approved or expected?
3. What script or batch file executed afterward?
4. Did the session lead to downloads, persistence, or credential abuse?

## Key Evidence To Review

- Quick Assist process start
- follow-on script command lines
- helpdesk records
- downloaded files
- persistence and remote tool activity

## Investigation Steps

1. Confirm whether the Quick Assist session was legitimate.
2. Review the batch or PowerShell command launched after the session.
3. Determine whether the command downloaded content or altered the system.
4. Check for RMM installation, persistence, or credential access after the session.
5. Validate the user story and whether the operator claimed to be support.

## Common Benign Explanations

- legitimate IT remediation
- approved remote support
- scripted support diagnostics

## Escalate When

Escalate if:

- the user did not request help
- the script is suspicious or obfuscated
- malicious downloads or persistence follow
- the activity aligns with known social-engineering patterns

## Suggested Response Actions

- terminate the session
- isolate the endpoint if compromise is suspected
- preserve scripts and command lines
- review other endpoints for similar Quick Assist chains

## Analyst Notes

This should be treated seriously when Quick Assist is not common in your environment.
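The process-chain logic this detection describes can be sketched in Python. The process names and command-line indicator strings below are illustrative assumptions for triage reference, not the detection's exact logic:

```python
# Minimal sketch of the Quick Assist -> script host chain check.
# Process names and indicator strings are illustrative assumptions.
SCRIPT_HOSTS = {"powershell.exe", "pwsh.exe", "cmd.exe"}
SUSPICIOUS_INDICATORS = ("-enc", "downloadstring", "iex", "bypass", "hidden")

def is_suspicious_chain(parent_name: str, child_name: str, child_cmdline: str) -> bool:
    """Flag a script host spawned under a Quick Assist session with risky flags."""
    if parent_name.lower() != "quickassist.exe":
        return False
    if child_name.lower() not in SCRIPT_HOSTS:
        return False
    cmd = child_cmdline.lower()
    return any(indicator in cmd for indicator in SUSPICIOUS_INDICATORS)
```

During triage, the same three fields (parent, child, command line) are what you pull from the process-creation events to confirm or refute the chain.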

content/triage-guides/sentinel/credential_access/device-code-phishing-followed-by-graph-mail-access.md

Whitespace-only changes.
Lines changed: 54 additions & 0 deletions
# OAuth Redirection Abuse Followed by Browser Download

## Goal

Identify suspicious OAuth phishing links that are followed by browser-driven download or payload execution.

## Why This Alert Matters

This pattern suggests the user clicked a Microsoft-branded auth link and was then redirected into malware delivery or scripted execution. It bridges phishing into endpoint compromise.

## What the Detection Is Looking For

This detection looks for:

- suspicious OAuth authorization URL click activity
- followed by browser-related download or execution
- using installers or LOLBins

## Initial Triage Questions

1. What file or payload was downloaded?
2. Was the user redirected to an attacker-controlled site?
3. Did the payload launch through a browser, installer, or script host?
4. Did the user report seeing a fake Microsoft auth page?

## Key Evidence To Review

- clicked URL and redirect chain
- endpoint process creation
- browser download history
- downloaded file names and hashes
- process ancestry and execution timing

## Investigation Steps

1. Review the clicked OAuth link and determine the final destination.
2. Identify what was downloaded or launched.
3. Check whether the payload executed via PowerShell, CMD, MSHTA, MSIExec, or Rundll32.
4. Determine whether the user was prompted to approve anything or just click through.
5. Review for persistence, RMM, credential theft, or exfiltration after execution.

## Common Benign Explanations

- legitimate software downloads after SSO-based login
- testing by developers or IT
- approved installers launched after portal sign-in

## Escalate When

Escalate if:

- the redirect target is malicious or suspicious
- payload execution follows quickly
- obfuscated or download-heavy command lines appear
- additional malicious behaviors are detected

## Suggested Response Actions

- isolate the endpoint if execution occurred
- collect the downloaded file and command line evidence
- review all other recipients of the same message
- block related URLs and payload indicators

## Analyst Notes

This is a strong cross-domain detection because it correlates mail/browser activity with endpoint execution.
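Tracing the redirect chain (investigation step 1) can be reduced to a simple check: a chain that starts on a Microsoft auth host but terminates elsewhere deserves scrutiny. A minimal sketch, assuming the trusted-host set is tuned per environment:

```python
from urllib.parse import urlparse

# Illustrative allow-list of Microsoft auth hosts; tune for your environment.
TRUSTED_AUTH_HOSTS = {"login.microsoftonline.com", "login.live.com"}

def final_destination_suspicious(redirect_chain: list[str]) -> bool:
    """True when a chain that begins on a trusted auth host ends off-platform."""
    if len(redirect_chain) < 2:
        return False
    first_host = urlparse(redirect_chain[0]).hostname or ""
    last_host = urlparse(redirect_chain[-1]).hostname or ""
    return first_host in TRUSTED_AUTH_HOSTS and last_host not in TRUSTED_AUTH_HOSTS
```

This is a triage aid, not a verdict: legitimate SSO flows also land off-platform, so combine it with what was downloaded and what executed afterward.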
Lines changed: 54 additions & 0 deletions
# SharePoint or OneDrive Bulk Download by Newly Risky User

## Goal

Identify high-volume SharePoint or OneDrive download activity performed by a user who recently showed risky sign-in behavior.

## Why This Alert Matters

After account compromise, attackers often collect documents from SharePoint or OneDrive. Bulk download by a newly risky user can indicate cloud collection or exfiltration.

## What the Detection Is Looking For

This detection looks for:

- a recent risky or device-code-related successful sign-in
- followed by high-volume SharePoint or OneDrive download activity
- by the same user within a short time window

## Initial Triage Questions

1. Was the sign-in suspicious or expected?
2. Is the download volume normal for the user?
3. What sites, folders, or files were involved?
4. Did the user also access mail, create rules, or grant consent?

## Key Evidence To Review

- risky sign-in timing and source
- download count
- site URLs
- object IDs or file names
- related mailbox, app, or forwarding activity

## Investigation Steps

1. Review the risky sign-in and determine whether it was expected.
2. Assess whether the download volume is unusual for the user.
3. Identify which SharePoint or OneDrive locations were accessed.
4. Determine whether the data appears sensitive or high-value.
5. Check for related mail compromise, consent abuse, or public-sharing changes.

## Common Benign Explanations

- planned migration or sync
- legitimate bulk download by project or admin staff
- new-device sync behavior

## Escalate When

Escalate if:

- the sign-in is suspicious and recent
- the download volume is unusual
- the sites contain sensitive documents
- the user also shows mailbox or app abuse

## Suggested Response Actions

- revoke sessions and review the account
- preserve cloud access records
- notify data owners for affected sites
- investigate whether files were later shared or exported elsewhere

## Analyst Notes

This is a strong cloud exfiltration signal when paired with risky sign-in or device code activity.
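"Is the download volume normal for the user?" is easier to answer against a per-user baseline than against a fixed tenant-wide number. One way to frame it, with the ratio and absolute floor as illustrative tuning assumptions:

```python
def is_bulk_anomaly(download_count: int, user_daily_baseline: float,
                    ratio: float = 10.0, floor: int = 100) -> bool:
    """Flag when downloads exceed both an absolute floor and a multiple of
    the user's typical daily volume. Thresholds are illustrative, not tuned."""
    baseline = max(user_daily_baseline, 1.0)  # avoid flagging on a zero baseline alone
    return download_count >= floor and download_count >= ratio * baseline
```

The floor suppresses noise from users with near-zero baselines, while the ratio catches heavy downloaders only when they far exceed their own norm (e.g. migration staff with a high baseline won't trip on routine volume).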
Lines changed: 55 additions & 0 deletions
# Device Code Sign-In Followed by Device Registration

## Goal

Identify suspicious device code authentication followed by registration of a new device, which may indicate attacker-controlled device enrollment and token abuse.

## Why This Alert Matters

Device code phishing can allow an attacker to obtain access tokens without stealing a password directly. If that access is then used to register a new device, it may support long-term access, token persistence, or Primary Refresh Token (PRT) abuse.

## What the Detection Is Looking For

This detection looks for:

- a successful device code sign-in
- followed within a short time window by a device registration event
- for the same user

## Initial Triage Questions

1. Did the user expect a device code login prompt?
2. Is the newly registered device known and corporate-managed?
3. Did the sign-in originate from an unusual IP, geography, or proxy?
4. Was there follow-on access to email, files, Teams, or cloud apps?

## Key Evidence To Review

- user UPN
- device code sign-in time
- source IP address
- app used during the sign-in
- registered device name and object ID
- later sign-ins from the new device

## Investigation Steps

1. Confirm the device code sign-in was successful and review its source context.
2. Validate the newly registered device with the user, endpoint team, or asset inventory.
3. Check whether the device is managed, compliant, and expected.
4. Review follow-on cloud activity such as Graph, mailbox, OneDrive, SharePoint, or Teams access.
5. Look for related OAuth consent, inbox rule creation, or risky sign-ins.

## Common Benign Explanations

- approved device enrollment
- legitimate user setup of a new corporate device
- IT-guided onboarding workflows

## Escalate When

Escalate if:

- the user denies the device code login
- the device is unknown or unmanaged
- the source IP is suspicious
- follow-on access to mail, files, or Teams occurs unexpectedly

## Suggested Response Actions

- revoke active sessions and refresh tokens
- disable or remove the suspicious device registration if confirmed malicious
- require credential reset and MFA review
- investigate nearby mailbox, file, and Teams activity

## Analyst Notes

This is a high-priority identity alert because it can indicate an attacker moved from phishing to durable cloud access.
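The "X followed by Y within a short window for the same user" pattern used here (and by several sibling guides) is a two-stream temporal join. A minimal sketch, assuming events are already reduced to `(user, timestamp)` pairs and the one-hour window is an illustrative default:

```python
from datetime import datetime, timedelta

def correlate(signins, registrations, window=timedelta(hours=1)):
    """Pair each device-code sign-in with any later device registration by
    the same user inside the window. Events are (user, timestamp) tuples.
    The window size is an illustrative tuning assumption."""
    hits = []
    for user, t_signin in signins:
        for reg_user, t_reg in registrations:
            # Registration must follow the sign-in, not precede it.
            if reg_user == user and timedelta(0) <= t_reg - t_signin <= window:
                hits.append((user, t_signin, t_reg))
    return hits
```

In production this join runs inside the SIEM query language rather than in Python; the sketch only makes the ordering and same-user constraints explicit for triage.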
Lines changed: 57 additions & 0 deletions
# OAuth Redirection Abuse URL Click

## Goal

Identify phishing clicks involving suspicious Microsoft OAuth authorization links that may redirect users into attacker-controlled workflows.

## Why This Alert Matters

OAuth authorization links can appear legitimate to users because they reference trusted Microsoft domains. Attackers abuse redirection parameters and prompt handling to deliver phishing, auth abuse, or malware.

## What the Detection Is Looking For

This detection looks for:

- URL clicks involving Microsoft OAuth authorize paths
- suspicious parameters such as:
  - redirect URI manipulation
  - prompt suppression
  - unusual scope behavior

## Initial Triage Questions

1. What message or lure caused the click?
2. Did the link redirect to a non-Microsoft destination?
3. Was the user prompted to sign in, authorize, or download something?
4. Did the click lead to risky sign-ins or endpoint activity?

## Key Evidence To Review

- full clicked URL
- full URL chain
- email subject and sender
- recipient user
- redirect target
- nearby sign-ins or browser downloads

## Investigation Steps

1. Review the clicked URL and its parameters.
2. Trace the full redirect path to see where the user landed.
3. Determine whether the email used a lure such as e-signature, secure message, voicemail, or collaboration invite.
4. Check for device code sign-ins, risky logins, or follow-on endpoint execution.
5. Determine whether the same URL was clicked by multiple users.

## Common Benign Explanations

- rare developer testing
- internal OAuth troubleshooting
- benign application login workflows

## Escalate When

Escalate if:

- the redirect target is suspicious
- the user entered credentials or approved access
- endpoint execution followed the click
- multiple users were targeted

## Suggested Response Actions

- block the URL/domain if malicious
- identify all recipients and clickers
- review sign-ins and endpoint activity for impacted users
- notify email security and IR teams

## Analyst Notes

This is primarily a delivery-stage alert and should be correlated with identity and endpoint activity.
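Reviewing the clicked URL's parameters (investigation step 1) mostly means decoding the query string and checking `redirect_uri` and `prompt`. A sketch using only the standard library; the allow-list and the specific checks are illustrative assumptions, not an exhaustive ruleset:

```python
from urllib.parse import urlparse, parse_qs

# Illustrative allow-list of expected redirect hosts; tune per environment.
ALLOWED_REDIRECT_HOSTS = {"login.microsoftonline.com", "portal.azure.com"}

def suspicious_authorize_params(url: str) -> list[str]:
    """Return human-readable reasons an /authorize click looks risky."""
    parsed = urlparse(url)
    if "authorize" not in parsed.path:
        return []  # not an OAuth authorize link; out of scope for this check
    params = parse_qs(parsed.query)
    reasons = []
    redirect = params.get("redirect_uri", [""])[0]
    redirect_host = urlparse(redirect).hostname or ""
    if redirect and redirect_host not in ALLOWED_REDIRECT_HOSTS:
        reasons.append(f"unexpected redirect_uri host: {redirect_host}")
    if params.get("prompt", [""])[0] == "none":
        reasons.append("prompt suppression (prompt=none)")
    return reasons
```

An off-list `redirect_uri` host is not automatically malicious (many first-party apps use their own callbacks), which is why the function returns reasons for an analyst to weigh rather than a verdict.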
Lines changed: 54 additions & 0 deletions
# Teams External Contact Followed by Quick Assist

## Goal

Identify possible social-engineering chains in which external Teams contact or chat activity is followed by Quick Assist execution.

## Why This Alert Matters

Attackers may use Teams to impersonate IT or support staff, then move the user into Quick Assist for hands-on-keyboard access. This pattern is especially relevant in Microsoft-centric environments.

## What the Detection Is Looking For

This detection looks for:

- external or guest Teams contact activity
- followed within a short time window by Quick Assist usage
- for the same user

## Initial Triage Questions

1. Was the Teams contact internal, guest, or external?
2. Did the external party claim to be support or IT?
3. Did the user then accept a Quick Assist session?
4. Were scripts, tools, or downloads launched afterward?

## Key Evidence To Review

- Teams operation type
- external/guest indicators
- Quick Assist timing
- user account and endpoint
- follow-on endpoint activity

## Investigation Steps

1. Review the Teams contact and whether it was external or guest-originated.
2. Determine whether the user was coached into accepting remote help.
3. Review Quick Assist execution on the endpoint.
4. Check for PowerShell, batch, RMM, or download activity after the session started.
5. Validate with the user what instructions they received.

## Common Benign Explanations

- approved external collaboration
- legitimate support interactions
- vendor troubleshooting through federated Teams workflows

## Escalate When

Escalate if:

- the external contact is suspicious or unknown
- the user was convinced to accept remote access
- follow-on script or payload activity occurred
- similar events affected multiple users

## Suggested Response Actions

- notify messaging/collaboration admins
- preserve Teams interaction evidence
- isolate affected hosts if malicious activity followed
- review external chat histories for wider targeting

## Analyst Notes

This is best tuned in environments with frequent external Teams collaboration.

content/triage-guides/sentinel/persistence/inbox-rule-external-foward-after-suspicious-signin.md

Whitespace-only changes.
Lines changed: 54 additions & 0 deletions
# New App Secret Added Then Service Principal Sign-In

## Goal

Identify cases where a new application credential is added and then the service principal signs in shortly afterward.

## Why This Alert Matters

This can indicate rapid operationalization of a cloud application after credential creation. In a malicious scenario, an attacker adds a secret to a compromised or newly created app and immediately begins using it.

## What the Detection Is Looking For

This detection looks for:

- app secret or key credential creation
- followed within hours by service principal sign-in
- using the same application identity

## Initial Triage Questions

1. Was the secret addition approved?
2. Who added the credential?
3. Was the app newly created or recently modified?
4. What resources did the service principal access afterward?

## Key Evidence To Review

- app name
- app owner and initiator
- secret addition event
- service principal sign-in timing
- target resources accessed after sign-in

## Investigation Steps

1. Validate whether the app is known and managed.
2. Review who added the secret or key credential.
3. Check what the service principal accessed shortly after sign-in.
4. Determine whether the app has broad or high-risk permissions.
5. Correlate with suspicious consent, mailbox access, SharePoint activity, or unusual cloud administration.

## Common Benign Explanations

- approved app onboarding
- secret rotation
- cloud engineering maintenance

## Escalate When

Escalate if:

- the app is unknown or newly created unexpectedly
- the credential was added by an unexpected user
- the service principal immediately accessed sensitive resources
- the app overlaps with consent-phishing or mail/file access detections

## Suggested Response Actions

- disable or restrict the app if malicious
- remove the newly added credential
- review service principal access scope
- notify cloud identity owners and IR

## Analyst Notes

This is one of the strongest cloud control-plane detections when timing is tight and access begins immediately.
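Since the signal strength here hinges on how tight the timing is, triage can tier severity by the gap between credential creation and first use. The thresholds below are illustrative assumptions, not validated cut-offs:

```python
from datetime import datetime, timedelta

def secret_use_severity(secret_added: datetime, first_sp_signin: datetime) -> str:
    """Tier severity by the gap between secret creation and first service
    principal sign-in. Threshold values are illustrative assumptions."""
    gap = first_sp_signin - secret_added
    if gap < timedelta(0):
        return "none"      # sign-in predates the new secret; different credential in use
    if gap <= timedelta(minutes=15):
        return "high"      # near-immediate use: classic operationalization timing
    if gap <= timedelta(hours=4):
        return "medium"
    return "low"           # long gap: more consistent with routine rotation
```

The tiering only ranks the alert for review order; the disposition still depends on who added the credential and what the service principal touched afterward.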
