This repository was archived by the owner on Jan 31, 2022. It is now read-only.

RPC framework for the GLIB #156

@lpetre-ulb

Description


With the latest developments, the GLIB back-end board can now be used with the GE1/1 v3 electronics and the vanilla GEM software.

The GEM_AMC firmware is ported to the GLIB FPGA and the CTP7 ZYNQ software is emulated with the container provided here: https://gitlab.cern.ch/lpetre/gem_glib_sw

The container image is built with CERN GitLab CI and stored in the CERN GitLab registry. A docker-compose.yml file is provided as an example in the Git repository. The documentation is slightly outdated but should be usable; it will be updated and further improved.

Usability improvements will be published as they become available. Bugs should also be expected, although the latest tests were very encouraging.

This first beta version is open for discussion and suggestions.

IP address assignment

Unlike the CTP7, the GLIB can only be assigned a single IP address. Several means exist for setting it: based on the µTCA slot number, statically assigned in the firmware or the EEPROM, through the MMC via IPMI, or via RARP.

The simplest IP addressing scheme is a fixed prefix plus the µTCA slot number. This solution is perfectly fine as long as it is guaranteed that two GLIBs will never sit in the same slot of two different µTCA crates on the same subnetwork.
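The slot-based scheme can be sketched as follows. The 192.168.80.x / 192.168.81.x prefixes are the ones used in this setup; the helper names and the 12-slot bound are illustrative assumptions, not part of the actual software.

```python
# Sketch of the slot-based IP assignment described above.
# Prefixes come from this setup; helper names are hypothetical.

GLIB_PREFIX = "192.168.80"       # back-end board addresses
CONTAINER_PREFIX = "192.168.81"  # RPC server (CTP7 emulator) addresses


def glib_ip(slot):
    """Return the GLIB IP address for a µTCA slot (assumed 1-12)."""
    if not 1 <= slot <= 12:
        raise ValueError("µTCA slot number must be in 1..12")
    return f"{GLIB_PREFIX}.{slot}"


def container_ip(slot):
    """Return the IP address of the matching RPC server container."""
    if not 1 <= slot <= 12:
        raise ValueError("µTCA slot number must be in 1..12")
    return f"{CONTAINER_PREFIX}.{slot}"


print(glib_ip(3))       # → 192.168.80.3
print(container_ip(3))  # → 192.168.81.3
```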

The other main difference is that the RPC server container (a.k.a. the CTP7 emulator) must also be given an IP address. For now, the GLIBs have the IP addresses 192.168.80.x and the containers 192.168.81.x, where x is the slot number in which the GLIB is installed.

As a consequence, the RPC calls and IPBus transactions use two different paths:

RPC:   DAQ machine -> container `gem-shelfXX-amcYY` (hostname) / `192.168.81.YY` -> controlhub (container or native service) -> GLIB (`192.168.80.YY`)
IPBus: DAQ machine -> `gem.shelfXX.amcYY` (connections.xml) -> controlhub (container or native service) -> GLIB (`192.168.80.YY`)
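For the IPBus path, the connections.xml entry could look like the sketch below. The ControlHub host, the address-table path, and the shelf/slot numbers are placeholders; the `chtcp-2.0` URI scheme and the default ports (10203 for ControlHub, 50001 for IPBus over UDP) are standard uHAL conventions, not values taken from this setup.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<connections>
  <!-- Hypothetical entry for the GLIB in shelf 1, slot 2 -->
  <connection id="gem.shelf01.amc02"
              uri="chtcp-2.0://localhost:10203?target=192.168.80.2:50001"
              address_table="file://etc/uhal/gem_amc_glib.xml" />
</connections>
```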

Container configuration

For now, the container is set up at ULB with two IP addresses on two interfaces:

  1. A macvlan interface to access the µTCA sub-network with the 192.168.80.x IP address.
  2. A veth network interface connected to a bridge which NATs to the public network.

The bridge network interface can obviously be removed if access to a public network from the container is not needed, or if the gateway 192.168.0.180 is configured to NAT.

A macvlan interface has the advantage of allowing access from remote machines even with multiple containers on the same host. However, the host IP address must be moved from the physical network interface (or the previously used virtual network interface) to a virtual macvlan interface, which is not supported by ifcfg configuration files.

If access is not required from remote machines, using a simple bridge might be an easier solution.
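The two-interface setup above could be declared along the following lines. This is a sketch, not the docker-compose.yml from the repository: the service name, image path, parent interface, and gateway are placeholders, and only the 192.168.80.x/192.168.81.x addressing is taken from this setup.

```yaml
# Hypothetical sketch of the two-network container setup described above.
version: "3"

services:
  glib-rpcsvc:
    image: gitlab-registry.cern.ch/lpetre/gem_glib_sw  # placeholder image path
    networks:
      utca:
        ipv4_address: 192.168.81.2   # container IP for the GLIB in slot 2
      default: {}                    # bridge network, NATed to the public network

networks:
  utca:
    driver: macvlan
    driver_opts:
      parent: eth1                   # physical interface on the µTCA subnet
    ipam:
      config:
        - subnet: 192.168.80.0/23    # covers both 192.168.80.x and 192.168.81.x
  default:
    driver: bridge
```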

Compilation

Currently, xHAL, reedmuller-c and ctp7_modules are compiled inside the image, with their Makefiles modified so that the libraries build successfully. The resulting binaries are then copied by hand to their final location (in a CTP7-like filesystem structure).

Note that the optical.so RPC module cannot be built for any back-end other than the CTP7. Since it would have no use on the GLIB, the module is simply disabled.

DAQ software

Except for a segfault (a PR with a fix is to come), the vanilla testConnectivity.py runs seamlessly. The last known issue, with the GBT RX phase scan, is now fixed in the firmware.

Update: a new memory error (a double free) was found during the S-bit rate vs THR_ARM_DAC scan. A solution has been found; a PR is to come.

Speed

Since each memory-mapped transaction (~10 µs) is converted to an IPBus packet containing a single IPBus transaction (~100 µs), all operations run more slowly. The factor of 10 is however mitigated by the fact that the runtime is not spent only on register transactions, but also on wait times, etc.

For example, testConnectivity.py takes "only" 2-3 times longer with the GLIB than with the CTP7.
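A back-of-envelope estimate shows why the 10x per-transaction penalty translates into only a 2-3x overall slowdown. The per-transaction latencies come from the numbers above; the fraction of runtime spent on register traffic is an illustrative assumption.

```python
# Rough model: only the register-transaction part of the runtime pays the
# 10x IPBus penalty; waits and software overhead are back-end independent.

T_MEMSVC = 10e-6   # ~10 µs per memory-mapped transaction (CTP7)
T_IPBUS = 100e-6   # ~100 µs per single-transaction IPBus packet (GLIB)


def slowdown(register_fraction):
    """Overall slowdown if `register_fraction` of the CTP7 runtime is spent
    on register transactions and the rest is back-end independent."""
    per_txn = T_IPBUS / T_MEMSVC  # 10x penalty on register accesses
    return register_fraction * per_txn + (1 - register_fraction)


# If, say, 20% of the scan time is register traffic, the overall
# runtime grows by a factor of ~2.8 -- consistent with the observed 2-3x.
print(round(slowdown(0.2), 1))  # → 2.8
```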

Also, the libmemsvc <-> IPBus wrapper creates a lot of very small IP packets. On the Docker host used for the tests, the limiting factor seems to be the CPU, eaten up by controlhub. It is worth evaluating the performance on a faster computer.

Context

Support the GLIB for, among others, QC7 operations. Currently, no back-end other than the CTP7 is supported.
