GPU-Accelerated-Facility-Reservation-System

CUDA implementation (student file: cs22m017.cu) for a facility reservation problem.

Problem statement

Given a set of computer centres, each with multiple facilities and per-slot capacities, and a list of user requests (each requesting contiguous time slots on a specific facility), determine which requests can be granted subject to per-slot capacity limits. Requests are prioritized by centre, then facility, then request id, as required by the assignment.
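The priority order can be expressed as a simple lexicographic comparator. A minimal C++ sketch (the struct and field names here are illustrative, not taken from cs22m017.cu):

```cpp
#include <tuple>

// Illustrative request record; field names are assumptions, not the ones in cs22m017.cu.
struct Request {
    int id, centre, facility, start, slots;
};

// Lexicographic priority: lower centre wins, then lower facility, then lower request id.
bool higherPriority(const Request& a, const Request& b) {
    return std::tie(a.centre, a.facility, a.id) < std::tie(b.centre, b.facility, b.id);
}
```

Sorting the request array with this comparator (e.g. via `std::sort`) yields the processing order the assignment requires.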

Why GPU acceleration matters

  • High request volumes: real systems can receive thousands to millions of reservation requests — evaluating and allocating requests in parallel reduces latency and improves throughput.
  • Batched work patterns: operations such as per-centre prefix sums, per-facility request processing and capacity updates expose parallelism that GPUs can exploit.
  • Scalability: moving hot inner loops (slot checks and capacity updates) to the GPU lets the CPU coordinate at a higher level while the GPU performs many independent checks concurrently.
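The hot inner loop mentioned above is, per request, a check of every requested slot against remaining capacity followed by a capacity update if all checks pass. A serial C++ sketch of that logic, assuming 24 slots per facility as the implementation does (function and variable names are illustrative):

```cpp
#include <array>

constexpr int kSlots = 24;  // discrete time slots per facility, per the implementation's assumption

// Attempt to grant a request for `slots` contiguous slots starting at `start` (0-based).
// `occupied` holds current bookings per slot; `capacity` is the facility's per-slot limit.
// Books the slots and returns true only if every requested slot has spare capacity.
bool tryGrant(std::array<int, kSlots>& occupied, int capacity, int start, int slots) {
    if (start < 0 || slots <= 0 || start + slots > kSlots) return false;
    for (int s = start; s < start + slots; ++s)
        if (occupied[s] >= capacity) return false;  // one full slot rejects the whole request
    for (int s = start; s < start + slots; ++s)
        ++occupied[s];  // all requested slots free: commit the booking
    return true;
}
```

On the GPU, many such per-facility checks can run concurrently because requests on different facilities touch disjoint capacity arrays.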

Files

  • cs22m017.cu — CUDA C++ source implementing the solution.
  • Assignment_4.pdf — assignment specification (reference).

Build & Run

Build with nvcc (NVIDIA CUDA compiler):

nvcc cs22m017.cu -o cs22m017

Run with the input file path:

./cs22m017 input.txt

Input format (short)

  • N : number of centres
  • For each centre: centre_id, facility_count, list of facility ids (facility_count items), list of capacities (facility_count items)
  • R : number of requests
  • Then R lines: req_id req_cen req_fac req_start req_slots

Notes & limitations

  • The implementation assumes 24 discrete time slots per facility.
  • req_start is treated as 1-based in input and converted to 0-based in code.
  • The program computes total successful/failed requests but the current source does not print them — add a print at the end of main to observe outcomes.
  • Some kernels are single-threaded per facility; further parallelisation can increase GPU utilization and speed.

About

CUDA-based parallel scheduling system that applies GPU acceleration, memory-efficient design, and concurrent slot validation to optimize large-scale reservation handling.
