- Overview
- Documentation
- Installation TLDR
- First-Run TLDR
- Unauthenticated Run TLDR
- OpenGraph TLDR
- Module/Data Output TLDR
- Scripts Folder TLDR
- Dependency Inventory
- Repository Layout
- Who Is This For?
- Author, Contributors, and License
- Resources
- Credits
In the spirit of transparency: parts of this project and documentation were developed with LLM coding assistance. Review code and behavior in your environment before operational use. Ideally, the dependency summary at the end of this README and the explanations throughout are enough to verify the tool meets your operational needs.
GCPwn (gee-see-pwn) is a Google Cloud offensive security assessment framework built for workspace-driven credential handling, service enumeration, artifact collection, and graph-based attack-path analysis.
It is designed as a one-stop shop for three primary workflows:
- Reconnaissance and Enumeration: Use success/fail API behavior tracked in the background, explicit `testIamPermissions` calls, and IAM binding analysis to understand effective permissions, from clear-box (probably a config audit) to opaque scenarios (finding creds during a pentest). Export data in JSON/CSV/Excel formats, download artifacts as they are found (for example, Artifact Registry Python packages), run broad discovery with `enum_all`, and download data throughout with the `--download` flag.
- Exploitation: Execute pre-packaged exploit workflows for blue-team validation and professional penetration-testing exercises.
- Graphing and OpenGraph: Convert collected data into OpenGraph output for BloodHound-style analysis (see below). By default, graphing focuses on selected privilege-escalation edges and can be expanded with more verbose output, inheritance evaluation, and multi-permission edge logic.
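gcpwn leans heavily on `testIamPermissions` for permission discovery, and large permission sets are checked in batches rather than in one call. A rough sketch of that batching idea (illustrative only, not gcpwn's actual code; it assumes the commonly documented 100-permission per-call limit for `testIamPermissions`):

```python
def chunk_permissions(perms, batch_size=100):
    """Split a large permission list into testIamPermissions-sized batches.

    Huge org/folder/project permission sets (10,000+ perms) cannot be
    tested in a single API call, so they are checked batch by batch.
    """
    return [perms[i:i + batch_size] for i in range(0, len(perms), batch_size)]


# 250 hypothetical permission strings -> batches of 100, 100, 50
perms = [f"fakeservice.resource.perm{i}" for i in range(250)]
batches = chunk_permissions(perms)
print([len(b) for b in batches])  # [100, 100, 50]
```

Each batch is then sent in its own `testIamPermissions` request, and the granted permissions from all batches are merged.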
Disclaimer: Use this tool only in systems, projects, and environments you own or are explicitly authorized to assess. Unauthorized use may violate law, policy, or terms of service.
Documentation is maintained in the GitHub Wiki:
Additional project docs:
- Contributing: `CONTRIBUTING.md`
- Roadmap: `ROADMAP.md`
- License: `LICENSE`
The installation strategy is to keep non-Google dependencies minimal, which should make it easier to get the tool approved if needed. `xlsxwriter` and `prettytable` are optional and can be installed only if you want those extra features, as shown below.
Install from source:

```
git clone https://github.com/NetSPI/gcpwn.git
cd gcpwn
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
```

Base install (no optional table/excel dependencies):

```
pip install -r requirements.txt
```

Install optional table output support:

```
pip install prettytable==3.17.0
```

Install optional Excel export support:

```
pip install xlsxwriter==3.2.9
```

Run the tool:

```
python -m gcpwn
```

Install from pip:

```
pip3 install gcpwn
```

If you want optional table rendering (`table` std output option in configs) and/or Excel export support (`data export excel` option):

```
pip3 install "gcpwn[table]"
pip3 install "gcpwn[excel]"
# both extras
pip3 install "gcpwn[table,excel]"
```

Run the tool:

```
gcpwn
```

If your shell cannot find `gcpwn`, run:

```
python -m gcpwn
```

Download a release binary from GitHub Releases:
Use the binary asset that aligns with your operating system and CPU architecture (for example, Linux/macOS/Windows and amd64 vs arm64).
Example (Linux/macOS):

```
chmod +x ./gcpwn
./gcpwn
```

Build and run with Docker:

```
docker build -t gcpwn .
docker run --rm -it gcpwn
```

Build with optional extras (if you want table rendering and/or Excel export available in the container):

```
# prettytable extra
docker build --build-arg GCPWN_EXTRAS=table -t gcpwn .
# xlsxwriter extra
docker build --build-arg GCPWN_EXTRAS=excel -t gcpwn .
# both extras
docker build --build-arg GCPWN_EXTRAS=table,excel -t gcpwn .
```

If you want local persistence for DB/output between runs, mount volumes:

```
docker run --rm -it \
  -v "$(pwd)/databases:/opt/gcpwn/databases" \
  -v "$(pwd)/gcpwn_output:/opt/gcpwn/gcpwn_output" \
  gcpwn
```

- Create/select a workspace by starting the program using one of the commands in the Installation section above.
- Load credentials (user/service/OAuth token). If you are using `gcloud`, you may need to run `gcloud config set project <PROJECT_ID>` when loading ADC-style credentials.
- Start with broad enumeration, ideally with ONE of the options below:
```
# Minimal first pass: enumerate discovered resources only (no testIamPermissions or download calls).
modules run enum_all

# Common first pass: run testIamPermissions checks on supported resources.
# Also runs a condensed list of permissions for org/folder/project resources.
modules run enum_all --iam

# Common first pass + downloads: run testIamPermissions and attempt content downloads where supported.
modules run enum_all --iam --download

# In-depth pass: --all-permissions includes large org/folder/project permission sets (10,000+ perms, executed in batches). Can take some time.
# See: gcpwn/modules/resourcemanager/utilities/data/all_*_permissions.txt for the full list or to customize it.
modules run enum_all --iam --all-permissions

# In-depth pass + downloads: enable artifact/content downloads where supported.
# Use `modules run enum_all -h` for token options.
# Example token: cloudrun_revision_env
modules run enum_all --iam --all-permissions --download
```

- Review what was collected:
```
# Downloaded artifacts are written under gcpwn_output/ by default.

# Export collected data.
# CSV/JSON work in the base install; Excel requires the optional Excel dependency from the Installation section.
data export csv
data export json
data export excel

# Review current credential permissions discovered via testIamPermissions.
# Use --csv to export full row-level permission data (to avoid truncation in terminal output).
creds info
creds info --csv

# Process enumerated IAM bindings and build IAM summaries.
modules run process_iam_bindings

# Build BloodHound-compatible graph JSON.
# Import output.json into BloodHound CE:
# https://bloodhound.specterops.io/get-started/quickstart/community-edition-quickstart
modules run enum_gcp_cloud_hound_data --expand-inherited --reset --out output.json
```

Sometimes you may want to run unauthenticated or quick modules without starting a full interactive session. You can run unauthenticated modules directly without entering the interactive workspace shell; this implicitly creates a workspace called PASSTHROUGH.
Examples:
```
# Run via installed console script
gcpwn --module unauth_apikey_enum_all_scopes --api-key AIza...

# Same flow via the python module entrypoint
python -m gcpwn --module unauth_apikey_gemini_exploit --api-key AIza...
```

By default, the OpenGraph module only graphs edges and related resource edges tied to privilege-escalation paths. The default OpenGraph escalation-rule allowlist lives in `gcpwn/mappings/og_privilege_escalation_paths.json`. Review the wiki for explanations of the available flags, but the best option is usually the following:
```
modules run enum_gcp_cloud_hound_data --expand-inherited --reset --out Bloodhound_Output.json
```

You might notice edges go to `role@location` instead of going directly to the project. This preserves authorization fidelity in the graph. If User A has compute.admin on Project A and User B has storage.admin on Project A, drawing both users directly to Project A and then Project A to all resources would incorrectly imply both users can reach the same resources, when User A can only get to compute and User B can only get to storage. The correct model is to route each user through their specific role-binding node at that location, and only then fan out to resources that role can actually affect.
Incorrect method (over-broad reach):

```
User A --> Project A --> Compute & Storage
User B --> Project A --> Compute & Storage
```

Correct method (binding-scoped reach):

```
User A --> compute_admin@project:A --> Compute Resources in Project A
User B --> storage_admin@project:A --> Storage Resources in Project A
```
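To make the binding-scoped model concrete, here is a tiny illustrative sketch (hypothetical data structures, not gcpwn internals) that routes each principal through a role-binding node and then fans out only to resources in that role's service:

```python
def build_edges(bindings, resources):
    """bindings: {user: (role, project)}; resources: {project: [(service, name)]}.

    Each user gets an edge to a role@project binding node, and that node
    fans out only to resources whose service matches the role.
    """
    edges = []
    for user, (role, project) in bindings.items():
        binding_node = f"{role}@project:{project}"
        edges.append((user, binding_node))
        service = role.split("_")[0]  # e.g. "compute_admin" -> "compute"
        for svc, name in resources.get(project, []):
            if svc == service:
                edges.append((binding_node, name))
    return edges


bindings = {"UserA": ("compute_admin", "A"), "UserB": ("storage_admin", "A")}
resources = {"A": [("compute", "vm-1"), ("storage", "bucket-1")]}
edges = build_edges(bindings, resources)
for src, dst in edges:
    print(f"{src} --> {dst}")
```

Because reach is computed per binding node rather than per project, UserA never gains an edge to the storage bucket and UserB never gains one to the VM.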
Generate OpenGraph JSON:

```
modules run enum_gcp_cloud_hound_data --out opengraph_output.json --reset [--include-all] [--expand-inherited] [--cond-eval]
```

Example:

```
(<staging-project-2>:ABC)> modules run enum_gcp_cloud_hound_data --expand-inherited --reset --out my_output.json
[*] Step 1: users_groups (Users/Groups graph)
[*] Completed users_groups: +92 nodes, +0 edges
[*] Step 2: iam_bindings (IAM bindings graph)
[*] Completed iam_bindings: +109 nodes, +201 edges
[*] Step 3: inferred_permissions (Inferred permissions graph)
[*] Completed inferred_permissions: +2 nodes, +2 edges
[*] Step 4: resource_expansion (Resource expansion graph)
[*] Completed resource_expansion: +63 nodes, +62 edges
[*] Pruned isolated service-account IAM-binding islands (pairs=17, key_islands=5, nodes=50, edges=28).
[*] Pruned orphan implied-IAM-binding nodes (implied_bindings=2, nodes=2, edges=2).
[*] Pruned isolated service-account nodes (service_accounts=43, nodes=43, edges=0).
[*] OpenGraph generation complete. Nodes: 171 | Edges: 235
[*] Saved graph JSON to my_output.json
```

Pass the output JSON into your local installation of BloodHound:

```
> head TEST.json -n 20
{
  "metadata": {
    "source_kind": "GCPBase"
  },
  "graph": {
    "nodes": [
      {
        "id": "allUsers",
        "kinds": [
          "GCPAllUsers",
          "GCPPrincipal"
        ],
        "properties": {
          "display_name": "allUsers",
          "source": "iam_members"
        }
      },
      {
        "id": "combo_iambinding:RESET_COMPUTE_STARTUP_SA@project:<Project_ID>#06e0003fe1",
        "kinds": [
[TRUNCATED]
```
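Because the output is plain JSON in the shape shown above, quick sanity checks are easy to script before importing. A small sketch that tallies node kinds (the second node is hypothetical; only the node schema shown above is assumed, and the real edge schema may differ):

```python
import json
from collections import Counter

# Inline stand-in for a generated OpenGraph file; normally you would
# open() the JSON written by enum_gcp_cloud_hound_data instead.
doc = json.loads("""
{"metadata": {"source_kind": "GCPBase"},
 "graph": {"nodes": [
   {"id": "allUsers", "kinds": ["GCPAllUsers", "GCPPrincipal"],
    "properties": {"display_name": "allUsers", "source": "iam_members"}},
   {"id": "user:alice@example.com", "kinds": ["GCPUser", "GCPPrincipal"],
    "properties": {"display_name": "alice"}}
 ]}}
""")

# Count how many nodes carry each kind label.
kind_counts = Counter(k for node in doc["graph"]["nodes"] for k in node["kinds"])
print(dict(kind_counts))
```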
Optional flags:

- `--include-all`: include broader relationship output that might not be a direct privilege-escalation path (for example, a binding that exists but is not a direct avenue to escalate privileges).
- `--expand-inherited`: expand inherited IAM scope relationships.
- `--cond-eval`: currently preserves conditional workflow plumbing (placeholder behavior).
- `--reset`: clear prior OpenGraph DB state before generation.
Then import the JSON into BloodHound CE.
If you want to add your own privilege-escalation edge (or any edge) to be called out by default, edit `og_privilege_escalation_paths.json` and add your rule. You need to know which permissions you want to flag. We cover adding a single-permission edge below, and the wiki covers multi-permission edge rules.

Let's assume we want to call out `cloudkms.cryptoKeys.update` and add it to our default single-permission rules.

- Add to the permission --> role dictionary:
  - If your target permission (e.g. `cloudkms.cryptoKeys.update`) is not already included, add the permission on a new line to `scripts/build_predfined_perm_to_role_input.txt`.
  - With your own GCP creds (for example, a free GCP account) in your own private GCP environment, run `./build_predefined_perm_to_roles.sh build_predfined_perm_to_role_input.txt > perm_to_role_mappings.json` as an authenticated user. This bash script gets all permissions for all predefined roles in a GCP environment to show which roles map to your target permission. You can also add the mapping manually to `gcpwn/data/core/mappings/og_permission_to_roles_map.json` using https://docs.cloud.google.com/iam/docs/roles-permissions
  - You should see the permission --> role(s) mapping in `perm_to_role_mappings.json`. Replace `gcpwn/data/core/mappings/og_permission_to_roles_map.json` with the content of `perm_to_role_mappings.json`.
- Add a rule definition to `og_privilege_escalation_paths.json` (note: multi-permission rules are covered in the wiki). In our case, it might look like the entry below. Note `resource_scopes_possible` is where one might see a binding with those permissions, and `resource_types` are the actual resource nodes you will be drawing edges to. For example, you might see `cloudkms.cryptoKeys.update` attached to a project IAM binding or attached directly to a key IAM binding, but the final node in either case will be a key node and NOT a project, per the reasoning stated above. If `cloudkms.cryptoKeys.update` is attached to a project IAM binding, gcpwn will fan out edges to key nodes discovered in that project rather than end at a project node.
```
"single_permission_rules": {
  "CAN_DISABLE_KMS_KEY": {
    "permission": "cloudkms.cryptoKeys.update",
    "description": "Can update KMS crypto key settings including disabling or changing key behavior.",
    "resource_scopes_possible": ["project", "kmscryptokey"],
    "target_selector": {
      "mode": "resource_types",
      "resource_types": ["kmscryptokey"]
    }
  }
}
```

- A final OpenGraph edge might then look like the following when ingested in BloodHound:
```
user:alice@example.com
  -[HAS_IAM_BINDING]->
iambinding:roles/cloudkms.admin@project:my-project
  -[CAN_DISABLE_KMS_KEY]->
resource:projects/my-project/locations/us-central1/keyRings/prod/cryptoKeys/app-key
```
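For intuition, the fan-out behavior of a single-permission rule can be sketched as follows (a deliberately simplified illustration with made-up helper and field names; gcpwn's actual rule-resolution logic is more involved):

```python
# Hypothetical, flattened version of a single_permission_rule for illustration.
rule = {
    "permission": "cloudkms.cryptoKeys.update",
    "edge": "CAN_DISABLE_KMS_KEY",
    "resource_scopes_possible": ["project", "kmscryptokey"],
    "target_resource_types": ["kmscryptokey"],
}


def edges_for_binding(principal, binding_permissions, scope, resources, rule):
    """Fan a binding out to matching resource nodes, never to the project itself."""
    if rule["permission"] not in binding_permissions:
        return []
    if scope["type"] not in rule["resource_scopes_possible"]:
        return []
    targets = [r for r in resources
               if r["type"] in rule["target_resource_types"]
               and r["project"] == scope["project"]]
    return [(principal, rule["edge"], t["id"]) for t in targets]


resources = [
    {"id": "cryptoKeys/app-key", "type": "kmscryptokey", "project": "my-project"},
    {"id": "buckets/app-bucket", "type": "bucket", "project": "my-project"},
]
edges = edges_for_binding("user:alice", {"cloudkms.cryptoKeys.update"},
                          {"type": "project", "project": "my-project"},
                          resources, rule)
print(edges)  # [('user:alice', 'CAN_DISABLE_KMS_KEY', 'cryptoKeys/app-key')]
```

Note how the project-scoped binding still terminates at the discovered key node, not the project, matching the binding-scoped model described earlier.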
These examples assume your OpenGraph JSON has already been imported into Neo4j/BloodHound-compatible tooling. Remove or alter the LIMIT clause as needed.
- See all nodes and edges:

```
MATCH (n)-[r]->(m)
RETURN n, r, m
LIMIT 1000
```

- See all nodes and edges minus service-agent-associated data:

```
MATCH (n)-[r]->(m)
WHERE coalesce(n.is_service_agent, false) = false
  AND coalesce(m.is_service_agent, false) = false
  AND coalesce(n.service_agent_role, false) = false
  AND coalesce(m.service_agent_role, false) = false
RETURN n, r, m
LIMIT 1000
```

- See all nodes and edges where IAM edges are inferred only:

```
MATCH (p)-[:HAS_IMPLIED_PERMISSIONS]->(g)-[r]->(t)
WHERE type(r) STARTS WITH "INFERRED_"
RETURN p, g, r, t
LIMIT 1000
```

- See all nodes and edges where IAM edges are binding-based only:

```
MATCH (p)-[seed:HAS_IAM_BINDING|HAS_COMBO_BINDING]->(g)
OPTIONAL MATCH (g)-[r]->(t)
WHERE r IS NULL OR NOT type(r) STARTS WITH "INFERRED_"
RETURN p, seed, g, r, t
LIMIT 1000
```

- Find paths to `roles/owner` or any custom role (replace `ABC_Name`):

```
MATCH p=(principal)-[:HAS_IAM_BINDING]->(binding:GCPIamSimpleBinding)
WHERE binding.role_name IN ["roles/owner", "ABC_Name"]
OPTIONAL MATCH (binding)-[r]->(target)
RETURN principal, binding, r, target, p
LIMIT 1000
```

- Identify paths where a service account leads to another service account:

```
MATCH p=(sa1:GCPServiceAccount)-[*1..6]->(sa2)
WHERE (sa2:GCPServiceAccount OR sa2:GCPServiceAccountResource)
  AND sa1 <> sa2
RETURN p
LIMIT 500
```

Default output is text. You can switch workspace output format with:
```
configs list
configs set std_output_format text
configs set std_output_format table
```

`table` mode requires the optional dependency `prettytable`, covered in the Installation section above.
```
# Export all collected service data to one CSV blob
data export csv

# Export all collected service data to one JSON blob
data export json

# Export all collected service data to one Excel workbook
data export excel

# Export all collected service data to a specific Excel file path
data export excel --out-file ./gcpwn_export.xlsx

# Export hierarchy image (SVG)
data export treeimage

# Run direct SQL against SQLite (service DB by default)
data sql --db service "SELECT * FROM iam_allow_policies LIMIT 25"

# Wipe service DB rows for current workspace (destructive)
data wipe-service --yes
```
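`data sql` is essentially a passthrough to the workspace's SQLite database. If you want to prototype queries outside the tool, the equivalent flow in plain Python looks like this (in-memory stand-in; the `iam_allow_policies` table name comes from the example above, but the columns here are invented for illustration):

```python
import sqlite3

# In-memory stand-in for gcpwn's service DB (normally a file under databases/).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE iam_allow_policies (resource TEXT, role TEXT, member TEXT)")
con.execute("INSERT INTO iam_allow_policies VALUES (?, ?, ?)",
            ("projects/demo", "roles/owner", "user:alice@example.com"))

# Same shape of query the `data sql` command runs.
rows = con.execute("SELECT * FROM iam_allow_policies LIMIT 25").fetchall()
print(rows)
```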
Scripts under scripts/ are included in this GitHub repository to support setup, customization, and development workflows.
They are not required for normal tool usage and are not part of the standard runtime path for the installed package.
Use them when you want to modify behavior, regenerate mapping data, or follow advanced project workflows.
For context, review the wiki and the OpenGraph instructions for adding an edge in this README.
Direct runtime dependencies are sourced from `requirements.txt` (and loaded via `pyproject.toml`):

- `boto3>=1.43.1,<2` (includes `botocore` transitively)
- `pandas==3.0.2`
- `requests==2.33.1`
- `google-api-core==2.30.3`
- `google-api-python-client==2.195.0`
- `google-auth-httplib2==0.3.1`
- `google-auth-oauthlib==1.3.1`
- `google-cloud-*` packages are pinned in `requirements.txt` (for example: `google-cloud-compute`, `google-cloud-storage`, `google-cloud-resource-manager`, `google-cloud-container`, etc.)
- `google-genai==1.74.0`

Optional dependencies:

- `prettytable==3.17.0` via `pip install "gcpwn[table]"`
- `xlsxwriter==3.2.9` via `pip install "gcpwn[excel]"`
- `pytest>=9.0` via `pip install "gcpwn[dev]"`
- `pyinstaller==6.20.0` is used by `.github/workflows/build_release.yml` to package standalone executables for release artifacts. It is not required for normal runtime usage of GCPwn.
Tip: If you want an SBOM from GitHub, open this repository and go to Insights -> Dependency graph, then use Export SBOM.
- `gcpwn/`: main package root.
- `gcpwn/__main__.py`: `python -m gcpwn` entrypoint.
- `gcpwn/cli/`: command processor and workspace command handlers.
- `gcpwn/core/`: session/config/db/runtime/export primitives.
- `gcpwn/modules/`: service modules (`everything`, `opengraph`, service-specific modules).
- `gcpwn/mappings/`: static mapping/config data used across modules.
- `tests/`: unit/integration/module tests.
- `databases/`: SQLite stores for workspaces, sessions, and service data.
- Pentesters: automate large portions of GCP recon and exploit-path discovery.
- Cloud security learners: quickly map APIs/resources and permission behavior.
- Security researchers: batch module execution + centralized data/action collection for deeper analysis/proxying.
- Author: NetSPI
- License: BSD-3-Clause (`LICENSE`)
- Contributors: PRs and issues welcome
The tool has changed in several ways and new videos are coming. For now, the following should provide a good starting point:
- fwd:cloudsec 2024: https://www.youtube.com/watch?v=opvv9h3Qe0s
- DEF CON 32 Cloud Village: https://www.youtube.com/watch?v=rxXyYo1n9cw
- Introduction blog: https://www.netspi.com/blog/technical-blog/cloud-pentesting/introduction-to-gcpwn-part-1/
Built on the shoulders of giants; inspiration, code, and/or supporting research included from:
- GMap API Scanner: https://github.com/ozguralp/gmapsapiscanner
- Rhino Security: https://rhinosecuritylabs.com/gcp/privilege-escalation-google-cloud-platform-part-1/
- GCPBucketBrute: https://github.com/RhinoSecurityLabs/GCPBucketBrute
- Google Cloud Python docs: https://cloud.google.com/python/docs/reference

