
[fastAPI/Gunicorn]: psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "pg_type_typname_nsp_index" #138

@Matthieu-Tinycoaching

First Check

  • I added a very descriptive title to this issue.
  • I used the GitHub search to find a similar issue and didn't find it.
  • I searched the SQLModel documentation, with the integrated search.
  • I already searched in Google "How to X in SQLModel" and didn't find any information.
  • I already read and followed all the tutorial in the docs and didn't find an answer.
  • I already checked if it is not related to SQLModel but to Pydantic.
  • I already checked if it is not related to SQLModel but to SQLAlchemy.

Commit to Help

  • I commit to help with one of those options 👆

Example Code

import datetime
import os
from configparser import ConfigParser
from typing import Any, Dict, List, Optional

from fastapi import FastAPI
from sqlmodel import JSON, Column, DateTime, Field, SQLModel, create_engine

class SemanticSearchDbPprod(SQLModel, table=True):
    __tablename__ = 'semantic_search_pprod'
    id: Optional[int] = Field(default=None, primary_key=True)
    id_user: int
    id_matrices: Optional[List[str]] = None
    date_time: datetime.datetime = Field(sa_column=Column(DateTime(timezone=True), nullable=False))
    query: str
    clean_query: str
    semantic_search_result: Optional[Dict[Any, Any]] = Field(default=None, sa_column=Column(JSON))
    error_code: int

## Function for setting connection information
def load_connection_info(
    ini_filename: str
) -> Dict[str, str]:
    parser = ConfigParser()
    parser.read(ini_filename)
    # Create a dictionary of the variables stored under the "postgresql" section of the .ini
    conn_info = {param[0]: param[1] for param in parser.items("postgresql")}
    # Adding a New Key–Value Pair of the variable stored under the "system" section of the .ini
    conn_info['host'] = parser.get('system', 'host', vars=os.environ)
    return conn_info

## Load connection info to "embeddings_sts_tf" database: host, database, user, password
conn_info_sts_embeddings = load_connection_info("./db.ini")

## Create SQLAlchemy engine to connect to postgreSQL Database
sqlEngine_sts_embeddings = create_engine(
    'postgresql://{user}:{pw}@{host}:5432/{db}'.format(
        host=conn_info_sts_embeddings["host"],
        db=conn_info_sts_embeddings["database"],
        user=conn_info_sts_embeddings["user"],
        pw=conn_info_sts_embeddings["password"],
    ),
    pool_recycle=3600,
    echo=False,
)

## Function for creating database and tables
def create_db_and_tables():
    SQLModel.metadata.create_all(sqlEngine_sts_embeddings)

app = FastAPI()

@app.on_event("startup")
def on_startup():
    create_db_and_tables()
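
For reference, `load_connection_info` expects a `db.ini` shaped roughly like this (the section and key names come from the code above; the values are placeholders):

```ini
[postgresql]
database = embeddings_sts_tf
user = my_user
password = my_password

[system]
host = localhost
```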

Description

This script works well when the FastAPI app is run with uvicorn.

However, when running it with gunicorn via https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker, I get the following error:

Checking for script in /app/prestart.sh
Running script /app/prestart.sh
Running inside /app/prestart.sh, you could add migrations to this file, e.g.:

#! /usr/bin/env bash

# Let the DB start
sleep 10;
# Run migrations
alembic upgrade head

[2021-10-18 14:04:24 +0000] [1] [INFO] Starting gunicorn 20.1.0
[2021-10-18 14:04:24 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
[2021-10-18 14:04:24 +0000] [1] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2021-10-18 14:04:24 +0000] [8] [INFO] Booting worker with pid: 8
[2021-10-18 14:04:25 +0000] [9] [INFO] Booting worker with pid: 9
[2021-10-18 14:04:25 +0000] [10] [INFO] Booting worker with pid: 10
[2021-10-18 14:04:25 +0000] [11] [INFO] Booting worker with pid: 11
[2021-10-18 14:04:25 +0000] [12] [INFO] Booting worker with pid: 12
[2021-10-18 14:04:25 +0000] [13] [INFO] Booting worker with pid: 13
[2021-10-18 14:04:25 +0000] [61] [INFO] Booting worker with pid: 61
[2021-10-18 14:04:25 +0000] [109] [INFO] Booting worker with pid: 109
[2021-10-18 14:04:25 +0000] [110] [INFO] Booting worker with pid: 110
[2021-10-18 14:04:25 +0000] [206] [INFO] Booting worker with pid: 206
[2021-10-18 14:04:25 +0000] [205] [INFO] Booting worker with pid: 205
[2021-10-18 14:04:25 +0000] [207] [INFO] Booting worker with pid: 207
[2021-10-18 14:04:25 +0000] [255] [INFO] Booting worker with pid: 255
[2021-10-18 14:04:25 +0000] [303] [INFO] Booting worker with pid: 303
[2021-10-18 14:04:25 +0000] [304] [INFO] Booting worker with pid: 304
[2021-10-18 14:04:25 +0000] [305] [INFO] Booting worker with pid: 305
[2021-10-18 14:04:25 +0000] [306] [INFO] Booting worker with pid: 306
[2021-10-18 14:04:25 +0000] [332] [INFO] Booting worker with pid: 332
[2021-10-18 14:04:25 +0000] [364] [INFO] Booting worker with pid: 364
[2021-10-18 14:04:25 +0000] [450] [INFO] Booting worker with pid: 450
[2021-10-18 14:04:25 +0000] [498] [INFO] Booting worker with pid: 498
[2021-10-18 14:04:25 +0000] [499] [INFO] Booting worker with pid: 499
[2021-10-18 14:04:26 +0000] [594] [INFO] Booting worker with pid: 594
[2021-10-18 14:04:26 +0000] [595] [INFO] Booting worker with pid: 595
[2021-10-18 14:04:26 +0000] [596] [INFO] Booting worker with pid: 596
{"loglevel": "info", "workers": 48, "bind": "0.0.0.0:80", "graceful_timeout": 120, "timeout": 0, "keepalive": 5, "errorlog": "-", "accesslog": "-", "workers_per_core": 1.0, "use_max_workers": null, "host": "0.0.0.0", "port": "80"}
{"loglevel": "info", "workers": 48, "bind": "0.0.0.0:80", "graceful_timeout": 120, "timeout": 0, "keepalive": 5, "errorlog": "-", "accesslog": "-", "workers_per_core": 1.0, "use_max_workers": null, "host": "0.0.0.0", "port": "80"}
2021-10-18 14:04:26,145 - INFO - Started server process [9]
2021-10-18 14:04:26,145 - INFO - Waiting for application startup.
2021-10-18 14:04:26,161 - INFO - Started server process [8]
2021-10-18 14:04:26,161 - INFO - Waiting for application startup.
[2021-10-18 14:04:26 +0000] [738] [INFO] Booting worker with pid: 738
[2021-10-18 14:04:26 +0000] [739] [INFO] Booting worker with pid: 739
{"loglevel": "info", "workers": 48, "bind": "0.0.0.0:80", "graceful_timeout": 120, "timeout": 0, "keepalive": 5, "errorlog": "-", "accesslog": "-", "workers_per_core": 1.0, "use_max_workers": null, "host": "0.0.0.0", "port": "80"}
[2021-10-18 14:04:26 +0000] [740] [INFO] Booting worker with pid: 740
2021-10-18 14:04:26,260 - INFO - Started server process [10]
2021-10-18 14:04:26,260 - INFO - Waiting for application startup.
[2021-10-18 14:04:26 +0000] [788] [INFO] Booting worker with pid: 788
[2021-10-18 14:04:26 +0000] [789] [INFO] Booting worker with pid: 789
{"loglevel": "info", "workers": 48, "bind": "0.0.0.0:80", "graceful_timeout": 120, "timeout": 0, "keepalive": 5, "errorlog": "-", "accesslog": "-", "workers_per_core": 1.0, "use_max_workers": null, "host": "0.0.0.0", "port": "80"}
2021-10-18 14:04:26,336 - INFO - Started server process [11]
2021-10-18 14:04:26,337 - INFO - Waiting for application startup.
[2021-10-18 14:04:26 +0000] [837] [INFO] Booting worker with pid: 837
[2021-10-18 14:04:26 +0000] [838] [INFO] Booting worker with pid: 838
[2021-10-18 14:04:26 +0000] [886] [INFO] Booting worker with pid: 886
{"loglevel": "info", "workers": 48, "bind": "0.0.0.0:80", "graceful_timeout": 120, "timeout": 0, "keepalive": 5, "errorlog": "-", "accesslog": "-", "workers_per_core": 1.0, "use_max_workers": null, "host": "0.0.0.0", "port": "80"}
2021-10-18 14:04:26,503 - INFO - Started server process [13]
2021-10-18 14:04:26,504 - INFO - Waiting for application startup.
[2021-10-18 14:04:26 +0000] [981] [INFO] Booting worker with pid: 981
[2021-10-18 14:04:26 +0000] [1076] [INFO] Booting worker with pid: 1076
{"loglevel": "info", "workers": 48, "bind": "0.0.0.0:80", "graceful_timeout": 120, "timeout": 0, "keepalive": 5, "errorlog": "-", "accesslog": "-", "workers_per_core": 1.0, "use_max_workers": null, "host": "0.0.0.0", "port": "80"}
2021-10-18 14:04:26,546 - INFO - Application startup complete.
2021-10-18 14:04:26,562 - ERROR - Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1799, in _execute_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 717, in do_execute
    cursor.execute(statement, parameters)
psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "pg_type_typname_nsp_index"
DETAIL:  Key (typname, typnamespace)=(semantic_search_pprod_id_seq, 2200) already exists.


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 540, in lifespan
    async for item in self.lifespan_context(app):
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 481, in default_lifespan
    await self.startup()
  File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 518, in startup
    handler()
  File "/app/main.py", line 30, in on_startup
    create_db_and_tables()
  File "/app/database.py", line 20, in create_db_and_tables
    SQLModel.metadata.create_all(sqlEngine_sts_embeddings)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/schema.py", line 4746, in create_all
    bind._run_ddl_visitor(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/future/engine.py", line 342, in _run_ddl_visitor
    conn._run_ddl_visitor(visitorcallable, element, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2110, in _run_ddl_visitor
    visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/visitors.py", line 524, in traverse_single
    return meth(obj, **kw)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/ddl.py", line 846, in visit_metadata
    self.traverse_single(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/visitors.py", line 524, in traverse_single
    return meth(obj, **kw)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/ddl.py", line 890, in visit_table
    self.connection.execute(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/future/engine.py", line 280, in execute
    return self._execute_20(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1611, in _execute_20
    return meth(self, args_10style, kwargs_10style, execution_options)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/ddl.py", line 77, in _execute_on_connection
    return connection._execute_ddl(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1378, in _execute_ddl
    ret = self._execute_context(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1842, in _execute_context
    self._handle_dbapi_exception(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2023, in _handle_dbapi_exception
    util.raise_(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
    raise exception
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1799, in _execute_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 717, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "pg_type_typname_nsp_index"
DETAIL:  Key (typname, typnamespace)=(semantic_search_pprod_id_seq, 2200) already exists.

[SQL: 
CREATE TABLE semantic_search_pprod (
	date_time TIMESTAMP WITH TIME ZONE NOT NULL, 
	semantic_search_result JSON, 
	id SERIAL, 
	id_user INTEGER NOT NULL, 
	id_matrices VARCHAR, 
	query VARCHAR NOT NULL, 
	clean_query VARCHAR NOT NULL, 
	error_code INTEGER NOT NULL, 
	PRIMARY KEY (id)
)

]
(Background on this error at: https://sqlalche.me/e/14/gkpj)

2021-10-18 14:04:26,562 - ERROR - Application startup failed. Exiting.

It seems that the startup function that creates the database tables runs once per gunicorn worker (one per CPU core), so every worker after the first tries to create tables that already exist. How is it possible to run this function only once with gunicorn and avoid this error?
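
One way to avoid the race is to serialize table creation so that only one process actually runs it. Below is a minimal sketch, assuming all workers run on one host with a shared writable directory; `init_once`, `lock_path`, and `done_path` are made-up names for illustration, not part of SQLModel or gunicorn:

```python
import fcntl
import os

def init_once(lock_path: str, done_path: str, init_fn) -> None:
    """Run init_fn at most once per host, even when many gunicorn workers
    call this concurrently at startup (hypothetical helper). The first
    worker to take the exclusive file lock runs init_fn; the others block
    on the lock, then see the marker file and skip."""
    with open(lock_path, "w") as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)  # blocks until we own the lock
        try:
            if not os.path.exists(done_path):
                init_fn()  # e.g. SQLModel.metadata.create_all(engine)
                open(done_path, "w").close()  # mark completion for later workers
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)
```

If workers may run on several hosts, the same idea can be expressed in the database itself with a Postgres advisory lock (`SELECT pg_advisory_lock(...)`) taken around `create_all`, or by running the schema creation/migration exactly once in `prestart.sh` before gunicorn forks its workers.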

Thanks!

Operating System

Linux

Operating System Details

Ubuntu 18.04 LTS

SQLModel Version

0.0.4

Python Version

3.8.8

Additional Context

No response

Labels

question (Further information is requested)