
HireOn ML Services

A FastAPI-based microservice providing AI and recommendation services for the HireOn recruitment platform.

Overview

This repository contains the machine learning services for the HireOn platform, including:

  1. Gen AI Services: CV analysis, job matching, and cover letter generation using Gemini AI.
  2. Recommendation Engine: Job recommendation system using semantic search and embeddings.

Features

Gen AI Services

  • CV Job Analysis: Analyzes a candidate's CV against job details to:

    • Calculate a relevance score between the CV and job
    • Identify skill matches and gaps
    • Provide personalized improvement suggestions
    • Highlight strengths and areas for development
  • Cover Letter Generator: Automatically generates personalized cover letters, output as professionally formatted PDFs, based on:

    • Candidate's CV content
    • Job description and requirements
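The CV analysis flow above can be sketched as a single structured prompt to Gemini. The field names and prompt wording below are illustrative assumptions for this sketch; the service's actual schemas live in app/api/models/ and may differ.

```python
import json

# Hypothetical shape of the analysis result described above; the real
# service's response schema may differ.
RESULT_FIELDS = ["relevance_score", "skill_matches", "skill_gaps",
                 "suggestions", "strengths"]

def build_analysis_prompt(cv_text: str, job_title: str, job_description: str) -> str:
    """Assemble a prompt asking Gemini for a structured CV-vs-job analysis."""
    schema = {field: "..." for field in RESULT_FIELDS}
    return (
        "You are a recruitment assistant. Compare the CV below with the job "
        "and reply with JSON matching this schema:\n"
        f"{json.dumps(schema, indent=2)}\n\n"
        f"Job title: {job_title}\n"
        f"Job description: {job_description}\n\n"
        f"CV:\n{cv_text}"
    )

prompt = build_analysis_prompt(
    "Python developer, 5 years of backend experience...",
    "Backend Engineer",
    "Build and maintain FastAPI services...",
)
```

Asking the model for JSON against an explicit schema keeps the relevance score, matches, and gaps machine-readable for the API response.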

Recommendation Engine

  • Job Recommendations: Provides personalized job recommendations based on:

    • Semantic understanding of CV content
    • Job title and description matching
    • Similarity scoring and ranking
  • CV Embeddings: Stores and retrieves vector embeddings of user CVs for efficient similarity searching
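The similarity scoring and ranking described above can be illustrated with plain cosine similarity over toy vectors. In the actual service this is delegated to ChromaDB; the embeddings and job names below are made up for the sketch.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embeddings: one CV vector and a few job vectors (illustrative only).
cv_embedding = [0.9, 0.1, 0.3]
jobs = {
    "Backend Engineer": [0.8, 0.2, 0.4],
    "Graphic Designer": [0.1, 0.9, 0.2],
}

# Rank jobs by similarity to the CV, highest first.
ranking = sorted(
    jobs,
    key=lambda job: cosine_similarity(cv_embedding, jobs[job]),
    reverse=True,
)
```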

Tech Stack

  • FastAPI: Modern, high-performance web framework for building APIs
  • Gemini AI: Google's generative AI model for natural language processing tasks
  • ChromaDB: Vector database for storing and querying embeddings
  • Google Cloud Storage: For storing documents and generated PDFs
  • Docker: Containerization for deployment
  • Python 3.12: Core programming language

Project Structure

.
├── app/                        # Main application package
│   ├── api/                    # API routes and models
│   │   ├── core/               # Core service components
│   │   ├── models/             # Data models and schemas
│   │   └── routes/             # API endpoints
│   └── utils/                  # Utility functions
├── credentials/                # Service account credentials
├── data/                       # Data files
├── experiments/                # Experimental notebooks and scripts
├── notebook/                   # Jupyter notebooks for development
├── .env                        # Environment variables
├── .env.example                # Example environment configuration
├── Dockerfile                  # Docker configuration
├── docker-compose.yml          # Docker Compose configuration
├── main.py                     # Application entry point
└── requirements.txt            # Python dependencies

Installation

Prerequisites

  • Python 3.12+
  • Docker and Docker Compose (optional)
  • Google Cloud account with Gemini API and Storage access

Environment Setup

  1. Clone the repository:

    git clone https://github.com/yourusername/hireon-ml-services.git
    cd hireon-ml-services
  2. Set up a virtual environment:

    python -m venv .venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  3. Install dependencies:

    pip install -r requirements.txt
  4. Configure environment variables:

    cp .env.example .env
    # Edit .env file with your API keys and configuration
  5. Add your Google Cloud credentials to the credentials/ directory.

Running the Service

Local Development

uvicorn main:app --reload

The API will be available at http://localhost:8000.

Using Docker

docker build -t hireon-ml-services .
docker run -p 8000:8000 hireon-ml-services

Or with Docker Compose:

docker-compose up
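A Compose setup for this stack might pair the API container with a ChromaDB container along these lines; this is an illustrative sketch, and the repository's actual docker-compose.yml may differ.

```yaml
# Illustrative sketch only; see the repository's docker-compose.yml.
services:
  ml-services:
    build: .
    ports:
      - "8000:8000"
    env_file: .env
    depends_on:
      - chroma
  chroma:
    image: chromadb/chroma
    ports:
      - "8001:8000"
```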

API Documentation

When the service is running, interactive API documentation is available at FastAPI's default paths:

  • Swagger UI: http://localhost:8000/docs
  • ReDoc: http://localhost:8000/redoc

Environment Variables

Variable                         Description
--------                         -----------
GEMINI_API_KEY                   Google Gemini API key
GOOGLE_APPLICATION_CREDENTIALS   Path to Google Cloud service account credentials
GOOGLE_CLOUD_PROJECT             Google Cloud project ID
GOOGLE_CLOUD_LOCATION            Google Cloud region
GOOGLE_GENAI_USE_VERTEXAI        Whether to use Vertex AI (true/false)
MONGO_URI                        MongoDB connection string
CHROMA_CLIENT_HOST               ChromaDB server host
CHROMA_CLIENT_PORT               ChromaDB server port
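Reading these variables in application code might look like the following minimal sketch; the default values here are illustrative assumptions, not the service's actual defaults.

```python
import os

# Read configuration from the environment; defaults are illustrative only.
config = {
    "gemini_api_key": os.environ.get("GEMINI_API_KEY", ""),
    "chroma_host": os.environ.get("CHROMA_CLIENT_HOST", "localhost"),
    "chroma_port": int(os.environ.get("CHROMA_CLIENT_PORT", "8000")),
    "use_vertexai": os.environ.get("GOOGLE_GENAI_USE_VERTEXAI", "false").lower() == "true",
}
```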

Development

Adding New Endpoints

  1. Create a new route file in app/api/routes/ or extend existing ones
  2. Define request/response models in app/api/models/models.py
  3. Implement utility functions in app/utils/
  4. Register the router in app/__init__.py

License

[License Information]

Contributors

[Contributor Information]
