Prerequisites

- Install asdf
- Install PostgreSQL Server
- Fork and clone the `master` branch of this mono repo
- Register an application on Unsplash, which is used as an Image Catalog
- Create a PostgreSQL database and replace the configured values in `core/config/dev.exs` with your own:

```elixir
# Configure your database
config :core, Core.Repo,
  username: "postgres",
  password: "postgres",
  database: "self_dev"
```
- Install tools:

```shell
$ asdf plugin add erlang
$ asdf plugin add elixir
$ asdf plugin add nodejs
$ asdf install
```
See `.tool-versions` for the exact versions that will be installed.

- Setup:

```shell
$ cd core
$ mix setup
```

- Build Core (from the core folder):

```shell
$ BUNDLE=self mix compile
```
- Run Core locally (from the core folder):

```shell
$ BUNDLE=self mix phx.server
```
- Go to your browser

The Core app is running at: http://localhost:4000
- Replace self.svg and self_wide.svg with your icons of choice.
- Change footer.ex to format the platform footer, or remove it completely.
- In items.ex you will find all the menu items. Add custom items when required.
Core supports the following page layouts:
- Stripped: minimalistic page without menu
- Website: menu at the top
- Workspace: menu on the left
Change the menus in items.ex.
The platform supports pluggable backends for global (non-study-specific) temporary file storage using the Systems.Storage.BuiltIn.Special interface. This storage is used, for example, in data donation workflows.
Although these files are temporary, the system must guarantee that they are stored reliably. We recommend using production-ready backends such as S3 or Azure Blob.
Two implementations are provided:

- Systems.Storage.BuiltIn.S3
- Systems.Storage.BuiltIn.LocalFS
You can configure the backend via environment variables. The active backend is set via:

```shell
STORAGE_BUILTIN_SPECIAL=s3 # or my_backend
```

To add a custom backend, define a module implementing the following behaviour:
```elixir
defmodule Systems.Storage.BuiltIn.Special do
  @callback store(
              folder :: binary(),
              identifier :: list(tuple()) | binary(),
              data :: binary()
            ) :: any()

  @callback list_files(folder :: binary()) :: list()

  @callback delete_files(folder :: binary()) :: :ok | {:error, atom()}
end
```

You can add your implementation in:

```
core/systems/storage/builtin/my_backend.ex
```
Minimal example:

```elixir
defmodule Systems.Storage.BuiltIn.MyBackend do
  @behaviour Systems.Storage.BuiltIn.Special

  def store(folder, filename, data) do
    # Your custom storage logic here
  end

  def list_files(_folder), do: []

  def delete_files(_folder), do: :ok

  # Configuration

  defp config do
    Application.fetch_env!(:core, Systems.Storage.BuiltIn.MyBackend)
  end

  defp var_1 do
    Access.get(config(), :var_1, 256)
  end

  defp var_2 do
    Access.get(config(), :var_2, "default")
  end

  defp var_n do
    Access.get(config(), :var_n, "https://mybackend.com") |> URI.parse()
  end
end
```

The list_files/1 and delete_files/1 functions can be implemented as no-ops if the platform's file export functionality in the user interface is not used and files are instead accessed directly at the final storage location (for example, when using Yoda).
Avoid hardcoded values; use the Elixir configuration system to retrieve runtime values.
An S3 example can be found in:
core/systems/storage/builtin/s3.ex
Below is a code snippet:

```elixir
def store(folder, filename, data) do
  filepath = Path.join(folder, filename)
  object_key = object_key(filepath)
  content_type = content_type(object_key)
  bucket = Access.fetch!(settings(), :bucket)

  S3.put_object(bucket, object_key, data, content_type: content_type)
  |> backend().request!()
end
```

To activate and configure a storage backend, you must modify the core/config/runtime.exs file.
By default, the runtime configuration uses Systems.Storage.BuiltIn.S3 when the
STORAGE_S3_PREFIX environment variable is configured. The fallback storage is Systems.Storage.BuiltIn.LocalFS.
```elixir
if storage_s3_prefix = System.get_env("STORAGE_S3_PREFIX") do
  config :core, Systems.Storage.BuiltIn, special: Systems.Storage.BuiltIn.S3

  config :core, Systems.Storage.BuiltIn.S3,
    bucket: System.get_env("AWS_S3_BUCKET"),
    prefix: storage_s3_prefix
else
  config :core, Systems.Storage.BuiltIn, special: Systems.Storage.BuiltIn.LocalFS
end
```

That config can be replaced by:
```elixir
if my_backend = System.get_env("STORAGE_BUILTIN_SPECIAL") do
  # Module.concat/1 turns "Systems.Storage.BuiltIn.MyBackend" into the module atom;
  # String.to_atom/1 would not add the required Elixir module prefix.
  config :core, Systems.Storage.BuiltIn, special: Module.concat([my_backend])
end

config :core, Systems.Storage.BuiltIn.MyBackend,
  var_1: System.get_env("STORAGE_BUILTIN_MYBACKEND_VAR1") |> String.to_integer(),
  var_2: System.get_env("STORAGE_BUILTIN_MYBACKEND_VAR2"),
  var_n: System.get_env("STORAGE_BUILTIN_MYBACKEND_VARN")
```

Your environment variables should contain something like this:
```shell
STORAGE_BUILTIN_SPECIAL=Systems.Storage.BuiltIn.MyBackend
STORAGE_BUILTIN_MYBACKEND_VAR1=1024
STORAGE_BUILTIN_MYBACKEND_VAR2="string value"
STORAGE_BUILTIN_MYBACKEND_VARN=https://client1.mybackend.com
```

We plan to support file transfer from the built-in storage to external systems (e.g., Yoda) via the user interface. Until then, this must be done manually or with automation.
To prevent users from exhausting resources on external services, Core uses rate limiters. The local configuration of rate limiters can be found in core/config/dev.exs:
```elixir
config :core, :rate,
  prune_interval: 5 * 1000,
  quotas: [
    [service: :azure_blob, limit: 1, unit: :call, window: :second, scope: :local],
    [service: :azure_blob, limit: 100, unit: :byte, window: :second, scope: :local]
  ]
```

... and the production configuration can be found in core/config/config.exs:
```elixir
config :core, :rate,
  prune_interval: 60 * 60 * 1000,
  quotas: [
    [service: :azure_blob, limit: 1000, unit: :call, window: :minute, scope: :local],
    [service: :azure_blob, limit: 10_000_000, unit: :byte, window: :day, scope: :local],
    [service: :azure_blob, limit: 1_000_000_000, unit: :byte, window: :day, scope: :global]
  ]
```

- Create a Docker image
```shell
$ cd core
$ docker build --build-arg VERSION=1.0.0 --build-arg BUNDLE=self . -t self:latest
$ docker image save self -o self.zip
```

- Run the Docker image
Required environment variables:
| Variable | Description | Example value |
|---|---|---|
| APP_NAME | Core app name | "Self" |
| APP_DOMAIN | domain where the Core app is hosted | "my.server.com" |
| APP_MAIL_DOMAIN | Domain of your email (after the @) | "self.com" |
| APP_ADMINS | String with space separated email addresses of the system admins; supports wildcards | "person1@self.com person2@self.com" |
| DB_USER | Username | <my-username> |
| DB_PASS | Password | <my-password> |
| DB_HOST | Hostname | "domain.where.database.lives" |
| DB_NAME | Name of the database in the PostgreSQL | "self_prod" |
| SECRET_KEY_BASE | 64-character sequence of random characters | <long-sequence-of-characters> |
| STATIC_PATH | Path to folder where uploaded files can be stored | "/tmp" |
| UNSPLASH_ACCESS_KEY | Application access key registered on Unsplash (Image Catalog) | "hcejpnHRuFWL-fKXLYqhGBt1Dz0_tTjeNifgD01VkGE" |
| UNSPLASH_APP_NAME | Application name registered on Unsplash (Image Catalog) | "Self" |
| STORAGE_SERVICES | Comma separated list of storage services | "yoda, aws, azure" |
Optional environment variables:
| Variable | Description | Example value |
|---|---|---|
| LOG_LEVEL | Console log level | "debug", "info", "warn", "error" |
| SENTRY_DSN | App monitoring DSN | "https://1234febac1234365cfe2d1fad616845b@o1234721120555008.ingest.sentry.io/1235721234883520" |
| GOOGLE_SIGN_IN_CLIENT_ID | Google Sign-In client ID | "123466465353-mui7en8912341rpn6qaevb89rd01234.apps.googleusercontent.com" |
| GOOGLE_SIGN_IN_CLIENT_SECRET | Google Sign-In client secret | "Q_lSWMy1234nPhxof1234Xyc" |
| SURFCONEXT_SITE | SURFconext site | "https://connect.test.surfconext.nl" |
| SURFCONEXT_CLIENT_ID | SURFconext client ID | "self.com" |
| SURFCONEXT_CLIENT_SECRET | SURFconext client secret | "12343HieOjb1234hcBpL" |
| STORAGE_S3_PREFIX | Prefix for S3 builtin storage objects. Without this variable, the "builtin" storage service defaults to the local filesystem | "storage" |
| CONTENT_S3_PREFIX | Prefix for S3 content objects | "content" |
| FELDSPAR_S3_PREFIX | Prefix for S3 feldspar objects | "feldspar" |
| PUBLIC_S3_URL | Publicly accessible URL of an S3 service | "https://self-public.s3.eu-central-1.amazonaws.com" |
| PUBLIC_S3_BUCKET | Name of the bucket on the S3 service | "self-prod" |
| DIST_HOSTS | Comma separated list of hosts in the cluster, see: OTP Distribution | "one, two" |
| ENABLED_OBAN_PLUGINS | Comma separated list of Oban plugins to enable; only pruner and lifeline are supported | "pruner, lifeline" |
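Putting the tables above together, starting a container might look like the sketch below. All values are placeholders taken from the example columns, and the published port is an assumption (check which port your release listens on). This is a deployment fragment, not a tested command:

```shell
# Sketch: run the image built above with the required environment variables.
# Every value here is a placeholder; replace it with your own configuration.
docker run -d --name self \
  -p 443:443 \
  -e APP_NAME="Self" \
  -e APP_DOMAIN="my.server.com" \
  -e APP_MAIL_DOMAIN="self.com" \
  -e APP_ADMINS="person1@self.com person2@self.com" \
  -e DB_USER="my-username" \
  -e DB_PASS="my-password" \
  -e DB_HOST="domain.where.database.lives" \
  -e DB_NAME="self_prod" \
  -e SECRET_KEY_BASE="$(openssl rand -base64 48 | tr -d '\n')" \
  -e STATIC_PATH="/tmp" \
  -e UNSPLASH_ACCESS_KEY="<your-access-key>" \
  -e UNSPLASH_APP_NAME="Self" \
  -e STORAGE_SERVICES="yoda, aws, azure" \
  self:latest
```

The `openssl rand -base64 48` invocation yields a 64-character string, which satisfies the SECRET_KEY_BASE requirement; any other source of 64 random characters works as well.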