
Memory leak potential in CtrlplaneJobPoller.triggeredJobIds map #4

@coderabbitai

Description

Problem

The CtrlplaneJobPoller class maintains a triggeredJobIds ConcurrentHashMap to track jobs that have already been triggered. However, there is no cleanup mechanism for this map, so it can grow without bound over time, especially in environments with a high polling frequency and many jobs.

Impact

For installations that run for extended periods, this could eventually cause memory issues as the map continues to grow with each new triggered job.

Recommended Solution

Implement a cleanup mechanism to remove entries from the triggeredJobIds map. Since the goal is to integrate with Jenkins Pipelines, consider implementing one of the following approaches:

Option 1: Time-based retention

Add an expiration time for each job ID in the map and periodically clean up entries older than a configured retention period.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

// Store the trigger timestamp alongside each job ID
private final ConcurrentHashMap<String, Long> triggeredJobIds = new ConcurrentHashMap<>();

// Call periodically from the execute() method to drop stale entries
private void cleanupTriggeredJobs() {
    long currentTime = System.currentTimeMillis();
    long retentionPeriodMs = TimeUnit.HOURS.toMillis(24); // should be configurable

    triggeredJobIds.entrySet().removeIf(entry ->
        (currentTime - entry.getValue()) > retentionPeriodMs);
}
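The snippet above leaves open how cleanupTriggeredJobs() actually gets invoked. One possible wiring is a background daemon thread that sweeps the map on a fixed schedule; a minimal sketch follows, in which the helper class name (TriggeredJobCleanup), the thread name, and the sweep interval are all illustrative assumptions rather than existing plugin APIs:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical helper that drives the timestamp-based sweep from a
// background scheduler instead of from execute().
public class TriggeredJobCleanup {
    private final ConcurrentHashMap<String, Long> triggeredJobIds;
    private final long retentionPeriodMs;
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "ctrlplane-jobid-cleanup"); // illustrative name
            t.setDaemon(true); // never block JVM shutdown
            return t;
        });

    public TriggeredJobCleanup(ConcurrentHashMap<String, Long> map, long retentionPeriodMs) {
        this.triggeredJobIds = map;
        this.retentionPeriodMs = retentionPeriodMs;
    }

    // Start sweeping at a fixed interval
    public void start(long periodMinutes) {
        scheduler.scheduleAtFixedRate(this::cleanup, periodMinutes, periodMinutes,
            TimeUnit.MINUTES);
    }

    // The same removeIf sweep, exposed so it can also be invoked directly
    void cleanup() {
        long cutoff = System.currentTimeMillis() - retentionPeriodMs;
        triggeredJobIds.entrySet().removeIf(e -> e.getValue() < cutoff);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```

Calling start() once when the poller initializes keeps the map bounded even if execute() stops being scheduled for some reason.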

Option 2: Build status-based cleanup

Use Jenkins BuildListener to receive notifications when builds complete and remove the corresponding job IDs from the map.

// Register a RunListener so job IDs are removed when their builds complete.
// Note: RunListener is an abstract class, so the listener must extend it
// rather than implement it.
import hudson.Extension;
import hudson.model.ParameterValue;
import hudson.model.ParametersAction;
import hudson.model.Run;
import hudson.model.StringParameterValue;
import hudson.model.TaskListener;
import hudson.model.listeners.RunListener;

@Extension
public class CtrlplaneBuildListener extends RunListener<Run<?, ?>> {

    @Override
    public void onCompleted(Run<?, ?> run, TaskListener listener) {
        ParametersAction params = run.getAction(ParametersAction.class);
        if (params == null) {
            return;
        }
        ParameterValue jobIdParam = params.getParameter("CTRLPLANE_JOB_ID");
        if (jobIdParam instanceof StringParameterValue) {
            CtrlplaneJobPoller.removeTriggeredJobId(
                (String) ((StringParameterValue) jobIdParam).getValue());
        }
    }
}

Option 3: Bounded collection

Use a bounded collection, such as a LinkedHashMap in access order with removeEldestEntry overridden, to automatically evict the oldest entries once a size cap is reached.
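A minimal sketch of this option, assuming the plain map replaces the ConcurrentHashMap (the cap of 10,000 entries, the class name, and the synchronizedMap wrapper, which is needed because LinkedHashMap is not thread-safe on its own, are all illustrative assumptions):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical bounded replacement for triggeredJobIds: an access-ordered
// LinkedHashMap that evicts its least-recently-used entry once the cap
// is exceeded.
public class BoundedTriggeredJobIds {
    private static final int MAX_TRACKED_JOBS = 10_000; // assumed cap

    private final Map<String, Boolean> triggeredJobIds =
        Collections.synchronizedMap(
            new LinkedHashMap<String, Boolean>(16, 0.75f, /* accessOrder */ true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                    // Called after every put(): evict the LRU entry over the cap
                    return size() > MAX_TRACKED_JOBS;
                }
            });

    /** Returns true if this job ID had not been seen before. */
    public boolean markTriggered(String jobId) {
        return triggeredJobIds.put(jobId, Boolean.TRUE) == null;
    }

    public boolean contains(String jobId) {
        return triggeredJobIds.containsKey(jobId);
    }
}
```

The trade-off is that a long-running build could have its ID evicted while still in flight if the cap is set too low, which is one reason the build-status-based cleanup in Option 2 is preferable here.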

We recommend Option 2 for integration with Pipeline jobs, as it ensures timely cleanup without requiring additional configuration.
