
GDI

REMS

Base repo: link

Change config.edn to match the OIDC credentials and the public URL. Then, create the DB and populate it appropriately:

docker compose up -d db
docker compose run --rm -e CMD="migrate" app
docker compose up -d app

After that, access REMS in a web browser (the default address is localhost:3000), log in with the account that is supposed to be the admin (e.g., the account of the person performing these steps) and go to /profile. There you can see your username, which should look something like "123456abdecf@lifescience-ri.eu". Copy it and save it in a shell variable:

export REMS_OWNER="<YOUR USERNAME>"

Grant it the owner role:

docker compose exec app java -Drems.config=/rems/config/config.edn -jar rems.jar grant-role owner $REMS_OWNER

Restart the app so it picks up the account as owner:

docker compose restart app

Create API credentials

Important: In this section we will create credentials that have different roles and are shared with different entities. Always choose strong API keys and never reuse the same API key for different users.

It should be noted that this entire section is taken from the ADELE deployment, so if you want to use these functionalities you'll need to download the appropriate scripts from that repo:

Also, you should save the credentials (user ID + API key) somewhere so you don't lose them. If you do lose them, you can either generate new ones or use make psql and check the users and api_key tables.

You can use make api-key to generate a random API key, in case it is helpful.
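The make api-key target aside, any cryptographically random string works as an API key; for instance, with openssl:

```shell
# 32 random bytes, hex-encoded: a 64-character API key
OWNER_KEY=$(openssl rand -hex 32)
echo "${OWNER_KEY}"
```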

Also, instead of setting up the credentials in the shell by hand, you can use the template credentials file:

cp credentials/credentials.sh.example credentials/credentials.sh
vim credentials/credentials.sh
source credentials/credentials.sh
  • Set up the relevant credentials:
export REMS_OWNER="<USERNAME OF THE OWNER>"
export OWNER_KEY="<API KEY TO BE USED BY THE OWNER>"

Create owner API key (for creating other users/API keys)

bash scripts/create_owner_api_key.sh

Create reporter credentials (for external services to retrieve permission data)

  • Set up the reporter credentials:
export REPORTER_KEY="<NEW API KEY FOR REPORTER USER>"
export REPORTER_USER="<REPORTER USER ID>" # optional
  • Run the following command:
bash scripts/create_reporter_user.sh

If you want to test whether the credentials for the reporter user are right, run make try-reporter-creds with the environment variables set. If the command returns "forbidden" or "unauthorized", something is wrong and you must fix it before sending the credentials to the LS AAI people.
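If make is not available, a check along the lines of try-reporter-creds can be sketched with curl. REMS authenticates API calls via the x-rems-api-key and x-rems-user-id headers; the helper name and the choice of the /api/entitlements endpoint here are mine, so adjust them to your deployment:

```shell
# Hypothetical helper: call a REMS API endpoint with the reporter
# credentials and check the HTTP status code.
check_rems_creds() {
  local base_url="$1"
  local status
  status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 \
    -H "x-rems-api-key: ${REPORTER_KEY}" \
    -H "x-rems-user-id: ${REPORTER_USER}" \
    "${base_url}/api/entitlements" || true)
  if [ "$status" = "200" ]; then
    echo "reporter credentials OK"
  else
    echo "reporter credentials check failed (HTTP ${status:-000})"
    return 1
  fi
}

# check_rems_creds "https://rems.gdi.biodata.pt"
```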

Create organization owner credentials (for the TRE server to use)

  • Set up the organization owner credentials. Once again, you can put whatever values you want, but choose a strong API key (and a different one from the reporter's!):
export ORG_OWNER_KEY="<NEW API KEY FOR ORG OWNER>"
export ORG_OWNER_USER="<ORG OWNER USER ID>" # optional
  • Run the following command:
bash scripts/create_org_owner_user.sh

How to change credentials

You might want to change your credentials later, especially when going from test to production. If you do, here's what you need to change:

  • OIDC: Change config.edn
  • DB: Change config.edn (database URL) AND docker-compose.yml (DB part)
  • User API keys: Directly in the DB; use make psql and look at the api_key and users tables.

Funnel

Base repo: link

  • Modify the config.yaml file to match this one:
Database: boltdb
Compute: local

EventWriters: 
  - boltdb
  - log

Server:
  # Hostname of the Funnel server.
  HostName: localhost

  # Port used for HTTP communication and the web dashboard.
  HTTPPort: 8010

  # Port used for RPC communication.
  RPCPort: 9090

RPCClient:
  # RPC server address 
  ServerAddress: localhost:9090

  # Credentials for Basic authentication for the server APIs using a password.
  # If used, make sure to properly restrict access to the config file
  # (e.g. chmod 600 funnel.config.yml)
  # User: funnel
  # Password: abc123

  # connection timeout.
  Timeout: 60s

  # The maximum number of times that a request will be retried for failures.
  # Time between retries follows an exponential backoff starting at 5 seconds
  # up to 1 minute
  MaxRetries: 10


Worker:
  # Files created during processing will be written in this directory.
  WorkDir: /data/funnel-work-dir
 
BoltDB:
  # Path to the database file
  Path: /data/funnel-work-dir/funnel.db

LocalStorage:
  # Whitelist of local directory paths which Funnel is allowed to access.
  AllowedDirs:
    - ./

GenericS3: # CHANGE ME. These are S3 credentials for an external server
  - Disabled: false
    Endpoint: "http://s3:9000"
    Key: "access"
    Secret: "secretkey"

# Amazon S3 is enabled by default, so we need to set it up to disable it
# and not conflict with our S3
AmazonS3:
  Disabled: true
  • Run docker compose up -d
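Funnel implements the GA4GH TES API, so once the container is up you can smoke-test it by submitting a minimal task. A sketch, assuming the HTTPPort 8010 from the config above (the /tmp/hello-task.json path is just an example):

```shell
# Minimal GA4GH TES task: run `echo` in an alpine container
cat > /tmp/hello-task.json <<'EOF'
{
  "name": "hello",
  "executors": [
    {
      "image": "alpine",
      "command": ["echo", "hello from funnel"]
    }
  ]
}
EOF

# With the server running, submit the task and list tasks:
# curl -X POST -H 'Content-Type: application/json' \
#   -d @/tmp/hello-task.json http://localhost:8010/v1/tasks
# curl http://localhost:8010/v1/tasks
```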

Storage and Interfaces

Base repo: link

  • Change the auth and OIDC redirect URLs in docker-compose.yml; you only need to change the domains and (possibly) the scheme.
  • Run cp .env.example .env
  • Change S3 and OIDC credentials in .env
  • Go to config/config.yaml and change auth.s3Inbox to match your inbox domain.
  • Change the list of trusted Visa issuers (config/iss.json) to match the LS AAI's or your Mock AAI's JWK URL, depending on which one you intend to use to log in. You also have to do the same for the server that issues the ControlledAccessGrant Visas, which is typically a REMS instance (local or central). Here is an example iss.json file:
[
    {
        "iss": "https://oidc:8080/",
        "jku": "https://oidc:8080/jwk"
    },
    {
        "iss": "http://aai-mock:8080/oidc/",
        "jku": "http://aai-mock:8080/oidc/jwk"
    },
    {
        "iss": "https://rems.gdi.biodata.pt/",
        "jku": "https://rems.gdi.biodata.pt/api/jwk"
    },
    {
        "iss": "https://login.aai.lifescience-ri.eu/oidc/",
        "jku": "https://login.aai.lifescience-ri.eu/oidc/jwk"
    },
    {
        "iss": "https://daam.portal.dev.gdi.lu/",
        "jku": "https://daam.portal.dev.gdi.lu/api/jwk"
    }
]
  • Run docker compose up -d
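A malformed iss.json (trailing comma, stray tab, missing key) can break token validation silently, so it is worth sanity-checking the file before starting the stack. A small sketch using Python's standard library (the helper name is mine):

```shell
# Validate an iss.json file: it must parse as JSON and every entry
# needs both an "iss" and a "jku" key.
validate_iss() {
  python3 - "$1" <<'PY'
import json, sys

path = sys.argv[1]
with open(path) as f:
    issuers = json.load(f)

for entry in issuers:
    assert "iss" in entry and "jku" in entry, f"incomplete entry: {entry}"
print(f"{path}: {len(issuers)} trusted issuer(s) OK")
PY
}

# validate_iss config/iss.json
```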

Extra

  • Using an external DB or S3 means you also have to modify config/config.yaml accordingly.

Beacon

Base repo: link

  • Modify beacon/conf/conf.py to match your instance's metadata. The most important fields are:
    • beacon_id, which needs to be the domain of the instance reversed (e.g., pt.biodata.beacon)
    • uri, which is the public URL of the Beacon
    • beacon_name, which is only used to identify your Beacon in the Beacon Network (BN) UI
  • If you plan to have more than a few variants and are worried about disk usage, you might want to comment out this line in beacon/connections/mongo/mongo-init/init.js: db.genomicVariations.createIndex({ "$**": "text" });
  • To manage access to the data, you have to configure the permission file (beacon/permissions/datasets/datasets_permissions.yml) like the following:
COVID_pop12_ita_1: # ID of the dataset
  public:
    default_entry_types_granularity: record
  • Run docker compose up -d
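The beacon_id convention described above (the domain's dot-separated labels in reverse order) can be derived mechanically; a small sketch, with a helper name of my choosing:

```shell
# Reverse the dot-separated labels of a domain, e.g.
# beacon.biodata.pt -> pt.biodata.beacon
reverse_domain() {
  echo "$1" | tr '.' '\n' | tac | paste -sd '.' -
}

reverse_domain "beacon.biodata.pt"   # prints pt.biodata.beacon
```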

Loading Data

  • This changes every now and then, so look for the most recent guide. Generally, you put your VCF and xlsx/csv files into a container such as ri-tools, which has scripts to convert genomic and phenotypic data to BFF and load it into the Beacon DB.

FDP

  • Create a file named application.yml, fill it with the following configuration, and change the relevant values:
instance:
  clientUrl: https://fdp.gdi.biodata.pt # TODO: CHANGEME
  persistentUrl: https://fdp.gdi.biodata.pt # TODO: CHANGEME

repository:
  type: 5
  blazegraph:
    url: http://blazegraph:8080/blazegraph

  • Do the same with a docker-compose.yml file using the following configuration:

services:
  fdp:
    image: fairdata/fairdatapoint:1.17.4
    volumes:
      - ./application.yml:/fdp/application.yml:ro

  fdp-client:
    image: fairdata/fairdatapoint-client:1.17.1
    ports:
      - 8666:80
    environment:
      - FDP_HOST=fdp

  mongo:
    image: mongo:4.0.12
    volumes:
      - fdp-data:/data/db

  blazegraph:
    image: metaphacts/blazegraph-basic:2.2.0-20160908.003514-6
    volumes:
      - ./blazegraph:/blazegraph-data

volumes:
  fdp-data:
  • Run docker compose up -d
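The client can take a moment to come up after the stack starts. A small readiness-poll sketch (the helper name is mine; port 8666 is the one mapped in the compose file above):

```shell
# Poll a URL until it returns HTTP 200, or give up after N attempts
wait_for_http() {
  local url="$1" attempts="${2:-30}"
  local i code
  for i in $(seq 1 "$attempts"); do
    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$url" || true)
    if [ "$code" = "200" ]; then
      echo "up: $url"
      return 0
    fi
    sleep 1
  done
  echo "not reachable after $attempts attempt(s): $url"
  return 1
}

# wait_for_http http://localhost:8666
```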

Change credentials (important!)

  • After starting up, you have to:
  • Go to your FDP webpage (localhost:8666 with the compose file above)
  • Log in using the default credentials (albert.einstein@example.com:password)
  • Click on the top right corner and go to Users
  • Remove "Nikola Tesla"
  • Click "Edit profile" on Albert Einstein and change the password to something safe

LS AAI

Creating a service

In order to be able to use LS AAI to log in to our products (Beacon, S&I, etc.), we need to create an LS AAI service (ideally, one per product). To register a service, visit this link, go to "New Service" and fill in the required fields. For reference, below you'll find a list of the fields common to all the services in GDI.

Common fields

  • Authentication protocol: OIDC
  • Login URL: The URL of the product (e.g., https://rems.gdi.biodata.pt)
  • Flows the service will use: authorization code
  • Token endpoint authentication type: client_secret_basic
  • PKCE type: SHA256 code challenge
  • Service will call introspection endpoint: I don't think this is required, but I suggest leaving it on
  • Issue refresh tokens for this client: No
  • Step 4 of the registration: No need to check or fill anything

Product-specific fields

Beacon

At the time of writing, a Beacon instance using the production implementation is not supposed to support login, so there is no need to create a new service if you only plan to use it through a Beacon Network. However, there is a BioData.pt implementation that has login functionality; if you are using it, you'll need to create a service with the following values:

  • Not sure, but I think the introspection endpoint is compulsory for Beacon
  • scopes: openid, email, profile, country, ga4gh_passport_v1
  • callback uri: base url + /oidc-callback (e.g., https://beacon.gdi.biodata.pt/oidc-callback)
S&I
REMS

Register as Bonafide researcher

One might need the Bonafide/Registered researcher GA4GH Visa in LS AAI. To do that, just visit this link. It might be needed to also