TLS CA distribution to OIDC login users + easier login #180
Conversation
📝 Walkthrough

Adds a new HTTP login service and CLI discovery flow, exposes configurable login endpoints (Route/Ingress/NodePort) with edge TLS, extends operator types/CRD/Helm charts for login configuration, and integrates login service startup plus OIDC config extraction into the controller and CI/e2e flows.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI as CLI (user)
    participant LoginSvc as Login Service
    participant OIDC as OIDC Provider
    participant Control as Control Plane
    CLI->>LoginSvc: GET /v1/auth/config
    LoginSvc-->>CLI: AuthConfig {grpcEndpoint, routerEndpoint, namespace, OIDC issuers, caBundle}
    CLI->>CLI: parse_login_argument(login_target)
    CLI->>OIDC: Initiate OIDC auth (browser/device flow)
    OIDC-->>CLI: Authorization code / token
    CLI->>Control: Present token / client config
    Control-->>CLI: Grant access / return client config
```

```mermaid
sequenceDiagram
    participant Operator as Operator Controller
    participant CM as ConfigMap (jumpstarter-controller)
    participant API as API Server (JWT authenticators)
    participant LoginMgr as Login Service Manager
    participant Kube as Kubernetes API
    Operator->>CM: Read jumpstarter-controller config
    CM-->>Operator: Config (new or legacy)
    Operator->>API: List JWT authenticators
    API-->>Operator: JWT authenticators
    Operator->>Operator: jwtAuthenticatorsToOIDCConfigs()
    Operator->>LoginMgr: NewServiceFromEnv() + SetOIDCConfig()
    Operator->>Kube: Reconcile login Ingress/Route/Service
    Kube-->>Operator: Resources created/updated
    Operator->>LoginMgr: Start(ctx) via manager
    LoginMgr-->>Operator: HTTP server listening (8086)
```
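The CLI side of the first diagram — take the discovery response and fill in whatever the user did not pass explicitly — can be sketched as follows. This is an illustrative Python sketch, not the real CLI code: `apply_auth_config` and the `oidc` key are hypothetical names, while `grpcEndpoint`, `namespace`, and `caBundle` follow the AuthConfig fields shown above.

```python
def apply_auth_config(client_config: dict, auth_config: dict) -> dict:
    """Fill in endpoint/namespace/issuer/CA from the discovery response,
    keeping any values the user has already set explicitly.
    (Hypothetical helper; field names partly assumed.)"""
    merged = dict(client_config)
    merged.setdefault("endpoint", auth_config.get("grpcEndpoint"))
    merged.setdefault("namespace", auth_config.get("namespace"))
    issuers = auth_config.get("oidc", [])  # "oidc" key is an assumption
    if issuers and "issuer" not in merged:
        merged["issuer"] = issuers[0].get("issuer")
    ca = auth_config.get("caBundle")
    if ca:
        # Apply the fetched CA without clobbering other TLS settings
        tls = dict(merged.get("tls") or {})
        tls["ca"] = ca
        merged["tls"] = tls
    return merged
```

Explicit user-provided values always win over discovered ones, which is why `setdefault` is used rather than plain assignment.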
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Force-pushed 65220b3 to aed490a
```go
// Fall back to legacy authentication format
rawAuth, ok := configmap.Data["authentication"]
if ok {
	var auth config.Authentication
	if err := yaml.Unmarshal([]byte(rawAuth), &auth); err == nil {
		return jwtAuthenticatorsToOIDCConfigs(auth.JWT)
	}
}
```
we don't need the legacy format anymore, remove this.
```go
// extractOIDCConfigs reads the OIDC configuration from the ConfigMap
func extractOIDCConfigs(reader client.Reader, namespace string) []login.OIDCConfig {
	var configmap corev1.ConfigMap
	if err := reader.Get(context.Background(), client.ObjectKey{
		Namespace: namespace,
		Name:      "jumpstarter-controller",
	}, &configmap); err != nil {
		setupLog.Error(err, "unable to read configmap for OIDC config")
		return nil
	}
```
It appears that the login service reads the OIDC configuration from the jumpstarter-controller ConfigMap only once at startup and does not refresh it afterward. If the configuration is updated later (for example a new issuer or audience), what will happen? It looks like /v1/auth/config would remain stale until the controller is restarted. We may need to add a watch mechanism to detect and apply ConfigMap changes dynamically (or some kind of documentation to perform a restart).
But it’s also possible that I’m missing an existing refresh path here, so please let me know if this is relevant :)
I think you're right. This is already a problem with the general OIDC config, I believe, and a watching mechanism should help here: it would address the previous bug (OIDC auth not being refreshed on ConfigMap changes) and also the new issue that we don't expose updated information on the endpoint. I will add the watcher, thank you :)
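For reference, the swap the watcher would need can be sketched like this — a Python stand-in for what would likely be a `sync.RWMutex`-guarded field in the Go controller; all names here are illustrative, not the real code:

```python
import threading


class OIDCConfigHolder:
    """Watch-and-swap pattern: a watcher thread calls set() whenever the
    ConfigMap changes, and the HTTP handlers call get() on every request,
    so /v1/auth/config never serves a stale snapshot."""

    def __init__(self, initial=None):
        self._lock = threading.Lock()
        self._configs = list(initial or [])

    def set(self, configs):
        # Copy on write so handlers never see a half-updated list
        with self._lock:
            self._configs = list(configs)

    def get(self):
        with self._lock:
            return list(self._configs)
```

The same idea addresses the review's other concern about `SetOIDCConfig` racing with handlers: all reads and writes go through the lock.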
OK, I am having issues with this part. Since it was a known issue, I will open a bug and we can fix it afterwards as a bug.
Opened: #196
I hope it sounds ok.
```go
pathTypePrefix := networkingv1.PathTypePrefix

// Build TLS secret name from service name if not provided via annotations
tlsSecretName := serviceName + "-tls"
```
If we're using a hardcoded `-tls` suffix, do we have an option to use custom TLS secrets?
That's true. We don't yet support any way to provide those secrets; if you use the cert-manager annotations on the ingress, cert-manager will create the secret itself.
https://cert-manager.io/docs/usage/ingress/
We should probably document that as being necessary, and probably not inject this tlsSecret into the Ingress resource if cert-manager integration is disabled.
We could probably also accept a custom TLS secret name for the login ingress?
Agree on the documentation, we should mention that the annotations are required.
On custom TLS secret names, I think it’s a good addition for flexibility, but I’m also fine with doing it in a future PR if you prefer.
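One possible shape for that flexibility, sketched with an illustrative annotation key — `jumpstarter.dev/tls-secret-name` is hypothetical; only the `-tls` fallback convention comes from the current code:

```python
def resolve_tls_secret_name(service_name: str, annotations: dict) -> str:
    """Prefer an explicitly configured secret name; otherwise fall back
    to the hardcoded "<service>-tls" convention discussed above."""
    # Hypothetical annotation key for a custom TLS secret
    explicit = annotations.get("jumpstarter.dev/tls-secret-name")
    return explicit if explicit else service_name + "-tls"
```

An empty annotation value falls through to the default, so admins can't accidentally configure a nameless secret.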
Add a new LoginService that exposes:
- GET / — Landing page with login instructions
- GET /v1/auth/config — JSON API returning auth configuration

The service provides:
- gRPC endpoint address
- Router endpoint address
- Namespace
- CA certificate bundle (for secure TLS)
- OIDC provider configuration (issuers, client IDs, audiences)

The service runs on port 8086 (plain HTTP) and is designed to have TLS terminated at the Route/Ingress level via edge termination.

This enables a simplified login flow:

    jmp login username@login.jumpstarter.example.com

instead of the previous complex command with many parameters.

Co-authored-by: Cursor <cursoragent@cursor.com>
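The payload this commit describes might look roughly like the following. This is a Python sketch for illustration only — the real service is written in Go; `grpcEndpoint`, `routerEndpoint`, `namespace`, and `caBundle` follow the fields listed in this PR, while the `oidc` key and the helper name are assumptions:

```python
import json


def build_auth_config_response(grpc_endpoint, router_endpoint, namespace,
                               ca_bundle, oidc_providers):
    """Sketch of the /v1/auth/config JSON body. The real Go handler
    may name fields differently; this only mirrors what the commit lists."""
    return json.dumps({
        "grpcEndpoint": grpc_endpoint,
        "routerEndpoint": router_endpoint,
        "namespace": namespace,
        "caBundle": ca_bundle,
        "oidc": oidc_providers,  # issuers, client IDs, audiences
    })
```

TLS is deliberately absent here: the service speaks plain HTTP on 8086 and relies on edge termination at the Route/Ingress, exactly as the commit message says.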
Add LoginConfig type to the Jumpstarter CRD for configuring the
login endpoint. The login service provides authentication
configuration discovery for simplified CLI login.
Key characteristics:
- Uses existing Endpoint type for flexible networking (Route, Ingress,
NodePort, LoadBalancer)
- TLS is handled at the Route/Ingress level via edge termination
(not passthrough like gRPC endpoints)
- Admin configures TLS via annotations/labels on endpoints
This enables configuration like:

```yaml
spec:
  controller:
    login:
      endpoints:
        - address: login.example.com
          route:
            enabled: true
            annotations:
              # admin-provided TLS annotations
```
Co-authored-by: Cursor <cursoragent@cursor.com>
Add support for reconciling login endpoints in the operator:
1. Controller deployment changes:
- Add port 8086 for login service
- Add CA_BUNDLE_PEM env var from CA ConfigMap (for login API response)
2. Service reconciliation:
- Add ReconcileLoginEndpoint method for login endpoints
- Creates ClusterIP service pointing to port 8086
3. Route/Ingress with edge TLS termination:
- createRouteForLoginEndpoint: Uses TLSTerminationEdge (not Passthrough)
- createIngressForLoginEndpoint: Standard HTTPS (no ssl-passthrough)
- Admin configures TLS via annotations/labels
This enables login endpoint configuration like:

```yaml
spec:
  controller:
    login:
      endpoints:
        - address: login.example.com
          route:
            enabled: true
```
Co-authored-by: Cursor <cursoragent@cursor.com>
Add Helm chart support for the login endpoint:

1. Values configuration:
   - login.enabled: Enable the login endpoint
   - login.hostname: Custom hostname for login endpoint
   - login.route: OpenShift Route configuration with edge TLS
   - login.ingress: Kubernetes Ingress configuration with TLS
2. New templates:
   - login-service.yaml: ClusterIP service on port 8086
   - login-route.yaml: Route with edge TLS termination
   - login-ingress.yaml: Ingress with TLS
3. Deployment updates:
   - Add CA_BUNDLE_PEM env var from CA ConfigMap
   - Add GIN_MODE=release env var
4. Schema updates:
   - Update model.py with Login configuration types
   - Regenerate values.schema.json for both charts

Co-authored-by: Cursor <cursoragent@cursor.com>
Add support for simplified login format that fetches configuration
from the login service:
jmp login myuser@login.jumpstarter.example.com
Changes:
- Add fetch_auth_config() to retrieve config from /v1/auth/config API
- Add parse_login_argument() to parse [username@]endpoint format
- Add LOGIN_TARGET positional argument to login command
- Auto-populate endpoint, namespace, issuer from fetched config
- Store CA certificate from login service in client TLS config
The login endpoint provides:
- gRPC endpoint address
- Router endpoint address
- Default namespace
- CA certificate bundle (for secure TLS)
- OIDC provider configuration (issuers, client IDs, audiences)
This simplifies the login process from:

    jmp login --endpoint grpc.example.com:443 --namespace jumpstarter \
        --name myuser --issuer https://auth.example.com

to:

    jmp login myuser@login.example.com
Co-authored-by: Cursor <cursoragent@cursor.com>
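The `[username@]endpoint` parsing rule above can be sketched as follows (hypothetical helper name; the real implementation is `parse_login_argument` in the CLI):

```python
def parse_login_target(target: str):
    """Parse "[username@]endpoint" into (username, endpoint).
    Splits on the last '@' so usernames containing '@' still work;
    returns None for the username when it is absent or empty."""
    username, sep, endpoint = target.rpartition("@")
    if not sep:
        # No '@' at all: the whole argument is the endpoint
        return None, target
    return username or None, endpoint
```

Using `rpartition` rather than `split("@")` keeps email-style usernames like `alice@corp` + endpoint intact.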
Add E2E tests for the login endpoint:

1. login endpoint returns auth config
   - Tests the /v1/auth/config API returns expected fields
   - Verifies grpcEndpoint and namespace are present
2. login endpoint serves landing page
   - Tests the landing page HTML is served
   - Verifies it contains Jumpstarter branding and login instructions
3. can login using simplified format via login endpoint
   - Tests the new jmp login [user@]endpoint syntax
   - Verifies client config is created with fetched settings

All tests gracefully skip if the login endpoint is not configured, allowing them to coexist with existing deployments that don't have the login service enabled.

Co-authored-by: Cursor <cursoragent@cursor.com>
When cert-manager is enabled, we know the CA ConfigMap will be created, so the CA_BUNDLE_PEM environment variable reference should not be optional. This ensures the controller pod will wait for the ConfigMap to be available rather than starting without it.

- Optional: true -> when cert-manager is disabled (manual TLS setup)
- Optional: false -> when cert-manager is enabled (CA ConfigMap guaranteed)

Co-authored-by: Cursor <cursoragent@cursor.com>
Add NodePort support for the login service to enable E2E testing:

1. Helm chart updates:
   - Add login.nodeport.enabled and login.nodeport.port to values.yaml
   - Add login.endpoint for announcing the endpoint to clients
   - Update login-service.yaml to support NodePort type
   - Update model.py and regenerate values.schema.json
2. Kind cluster configuration:
   - Add port mapping 30014 -> 8086 for the login NodePort
3. Deploy scripts:
   - Update deploy_vars with a LOGIN_ENDPOINT variable
   - Update deploy_with_helm.sh to configure login via NodePort or Ingress
   - Update deploy_with_operator.sh to add login endpoint configuration

This enables the login endpoint to be accessible in local Kind clusters for E2E testing with the simplified jmp login command.

Co-authored-by: Cursor <cursoragent@cursor.com>
The login.endpoint field was not being used by the LoginService. The client already knows the login endpoint, since that's how it calls the API in the first place.

Co-authored-by: Cursor <cursoragent@cursor.com>
- Calculate LOGIN_ENDPOINT in setup-e2e.sh based on deployment method
- Write LOGIN_ENDPOINT to .e2e-setup-complete for use in tests
- Update tests.bats to use LOGIN_ENDPOINT when available, falling back to discovering it from routes/ingresses

This allows the simplified login test to work with NodePort deployments where there's no route or ingress, just a direct NodePort exposure.

Co-authored-by: Cursor <cursoragent@cursor.com>
The createService function was not using the endpoint.NodePort.Port value when creating NodePort services. This caused NodePort services to be assigned random ports instead of the configured values.

Fixed all three reconciliation functions:
- ReconcileEndpoint (gRPC controller endpoints)
- ReconcileRouterReplicaEndpoint (router endpoints)
- ReconcileLoginEndpoint (login endpoints)

Also explicitly set nodePort in deploy_with_helm.sh for the login service. This fixes the issue where the login service was assigned a random NodePort (e.g., 29289) instead of the configured 30014, which is mapped to host port 8086 in the Kind cluster configuration.

Co-authored-by: Cursor <cursoragent@cursor.com>
Three issues were fixed:

1. Reconciliation order: The CA ConfigMap was being created before the cert-manager Certificate resources, meaning the CA secret didn't exist yet. Now certificates are reconciled first, then the CA ConfigMap.
2. Timing with deployment: The deployment was created before the certificates were reconciled. Now reconcileCertificates runs before reconcileControllerDeployment, ensuring the CA ConfigMap has the correct value when the pod starts.
3. Error handling: When the CA secret is expected but not found (cert-manager self-signed mode), the reconciler now returns an error to trigger a requeue instead of creating an empty ConfigMap. This causes the deployment to wait, since the ConfigMap env var reference is not optional.

This ensures the CA_BUNDLE_PEM environment variable is properly populated when the login service starts, and the /v1/auth/config endpoint returns the caBundle field.

Co-authored-by: Cursor <cursoragent@cursor.com>
The Python CLI's grpc.ssl_channel_credentials() path expects the CA bundle to be base64 encoded (the CLI calls base64.b64decode on the ca field). Update the login service to base64 encode the CA bundle before returning it in the /v1/auth/config JSON response.

Co-authored-by: Cursor <cursoragent@cursor.com>
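A minimal round-trip of that encoding contract, with an illustrative (not real) PEM placeholder — per the commit, the CLI base64-decodes the stored `ca` field before handing the bytes to `grpc.ssl_channel_credentials`:

```python
import base64

# Illustrative placeholder, not a real certificate
CA_PEM = "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"

# Server side (login service): base64 encode the PEM bundle before
# embedding it in the /v1/auth/config JSON response.
encoded = base64.b64encode(CA_PEM.encode()).decode()

# Client side (CLI): decode the stored ca field back to PEM bytes
# before constructing the gRPC channel credentials.
decoded = base64.b64decode(encoded).decode()
assert decoded == CA_PEM
```

The round trip is lossless, so the exact PEM bytes the service read from the ConfigMap are what the client verifies against.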
- Infer --client from username part of login_target (user@endpoint)
When no --client is specified and login_target has a username,
use it as the client alias for the new config
- Add --insecure-login-tls flag to skip TLS verification when
fetching configuration from the login endpoint
- Add --insecure-login-http flag to use HTTP instead of HTTPS
when fetching configuration (for local testing with NodePort)
- Update fetch_auth_config to support both insecure options
- Update opt_config to return ("client", "default") when allow_missing
is True and no config is found, instead of raising an error
This allows simplified login commands like:
jmp login myuser@login.example.com --insecure-login-http
Co-authored-by: Cursor <cursoragent@cursor.com>
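How the HTTP flag above affects the fetch URL can be sketched as follows (hypothetical helper; only the `/v1/auth/config` path and the flag semantics come from this PR — `--insecure-login-tls` would additionally skip certificate verification, which is not shown here):

```python
def build_config_url(endpoint: str, insecure_http: bool = False) -> str:
    """Build the discovery URL for a login endpoint.
    insecure_http mirrors --insecure-login-http: plain HTTP for
    local testing against a NodePort, HTTPS otherwise."""
    scheme = "http" if insecure_http else "https"
    return f"{scheme}://{endpoint}/v1/auth/config"
```

With a Kind NodePort setup this yields e.g. `http://login.example.com:30014/v1/auth/config`, matching the `--insecure-login-http` example in the commit.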
Fix incomplete test in "can login with simplified login" to properly verify that the tls.ca field is populated in both client and exporter config files after using the simplified login format. The login endpoint should provide the CA certificate, and the CLI should write it to the config files for secure TLS connections.

Co-authored-by: Cursor <cursoragent@cursor.com>
Use --name="" syntax instead of --name "" to ensure the empty string argument is properly passed to click without shell parsing issues that could cause the issuer URL to be treated as a positional argument.

Co-authored-by: Cursor <cursoragent@cursor.com>
Add LoginEndpoint configuration to the login service so the landing
page can display the correct jmp login command with the actual login
endpoint URL instead of the gRPC endpoint.
Changes:
- Add LoginEndpoint to login service Config struct and template data
- Update landing page template to use {{.LoginEndpoint}}
- Add LOGIN_ENDPOINT env var to operator controller deployment
- Add login.endpoint to Helm chart values and controller deployment
- Update deploy scripts to set LOGIN_ENDPOINT variable
Co-authored-by: Cursor <cursoragent@cursor.com>
Remove the unnecessary hostname fallback: if users need to set the login endpoint, they should use the explicit endpoint field. The hostname field is for Route/Ingress configuration, not for endpoint display.

Co-authored-by: Cursor <cursoragent@cursor.com>
Automatically set the newly created/updated client as the default after a successful OIDC login, eliminating the need for users to manually run `jmp config client use <name>`.

Co-authored-by: Cursor <cursoragent@cursor.com>
Add assertion to check that after simplified login, the new client is automatically set as the default (marked with * in the CURRENT column).

Co-authored-by: Cursor <cursoragent@cursor.com>
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
python/packages/jumpstarter-cli/jumpstarter_cli/login.py (1)
148-238: ⚠️ Potential issue | 🟠 Major — Apply the fetched CA bundle to existing configs.

The CA bundle is only applied when creating new configs; existing client/exporter configs skip TLS configuration entirely, so fetched CAs are dropped. The base64 encoding/decoding is already handled correctly by `ssl_channel_credentials`, so no additional decoding is needed. Apply the CA to existing configs after parsing the login endpoint:

Suggested fix

```python
# Apply fetched CA to existing configs, not only new ones
if ca_bundle and isinstance(config, (ClientConfigV1Alpha1, ExporterConfigV1Alpha1)):
    if getattr(config, "tls", None) is None:
        config.tls = TLSConfigV1Alpha1(insecure=insecure_tls_config, ca=ca_bundle)
    else:
        config.tls.ca = ca_bundle
```
🤖 Fix all issues with AI agents
In `@controller/deploy/helm/jumpstarter/charts/jumpstarter-controller/model.py`:
- Around line 108-159: The Login model is missing a top-level tls model so
values like login.tls.secretName are rejected; add a new LoginTls (or LoginTLS)
Pydantic model similar to LoginIngressTls with model_config =
ConfigDict(extra="forbid") and a secretName: Optional[str] Field, then add tls:
Optional[LoginTls] = None to the Login class (alongside route, ingress,
nodeport) and regenerate the JSON schema/CRD; ensure you reference the existing
types Login, LoginIngressTls, LoginIngress, LoginNodeport, and Port when
updating and keep the extra="forbid" behavior consistent.
In
`@controller/deploy/helm/jumpstarter/charts/jumpstarter-controller/templates/controller-deployment.yaml`:
- Around line 68-73: The LOGIN_ENDPOINT env value currently falls back to
"login.{{ .Values.global.baseDomain }}" when .Values.login.endpoint is unset,
which omits the NodePort port for NodePort-only setups; update the
controller-deployment.yaml template to check .Values.login.nodeport.enabled and,
when true and .Values.login.endpoint is empty, append the NodePort port (30014)
to the fallback hostname (e.g., "login.<baseDomain>:30014") so the
LOGIN_ENDPOINT env (symbol LOGIN_ENDPOINT) includes the port, or alternatively
add a clear comment/documentation in the template indicating that
.Values.login.endpoint must be explicitly set when using login.nodeport.enabled.
- Around line 94-101: The CA_BUNDLE_PEM env var uses a configMapKeyRef to
jumpstarter-service-ca-cert with optional: true which causes silent TLS failures
when cert-manager is enabled but the configmap isn’t created; update the
template around the CA_BUNDLE_PEM configMapKeyRef so that optional is
conditional on a Helm value (e.g. .Values.certManager.enabled) — set optional:
true only when cert-manager is disabled, otherwise require the configmap
(optional: false), and add/adjust chart values documentation to instruct users
to pre-create jumpstarter-service-ca-cert when deploying via Helm if
cert-manager won't supply it.
In `@controller/deploy/helm/jumpstarter/model.py`:
- Around line 46-53: The Login Pydantic model is missing model_config =
ConfigDict(extra="forbid") which allows arbitrary extra fields and makes the
generated JSON schema omit additionalProperties; update the Login class (class
Login) to include model_config = ConfigDict(extra="forbid") to match Metrics and
Global models so unexpected keys are rejected, and optionally decide whether to
add a tls field if the schema should mirror values.yaml (document the decision
in the model if you keep it simplified).
In `@controller/internal/service/login/service.go`:
- Line 139: The shutdown timeout uses context.WithTimeout(context.Background(),
5) which is interpreted as 5 nanoseconds; update the call to provide an explicit
duration (e.g., change the timeout to 5*time.Second or another appropriate unit)
by modifying the context.WithTimeout invocation that returns shutdownCtx, cancel
in service.go, and add the "time" package to the imports if it's not already
present.
- Around line 96-99: SetOIDCConfig mutates s.config.OIDC without synchronization
which can race with readers like handleAuthConfig and handleLandingPage; add a
mutex field (e.g., mu sync.RWMutex) to Service and wrap writes in SetOIDCConfig
with mu.Lock()/Unlock(), and protect reads of s.config.OIDC in handleAuthConfig
and handleLandingPage with mu.RLock()/RUnlock() so concurrent access is safe
(alternatively, if you prefer not to add locking, document that SetOIDCConfig
must only be called before Start() and enforce that via code/comments).
In `@controller/internal/service/login/templates/index.html`:
- Around line 230-234: Update the external anchor elements in the footer (the
<div class="footer"> anchors linking to "https://jumpstarter.dev" and
"https://github.com/jumpstarter-dev/jumpstarter" that use target="_blank") to
include rel="noopener noreferrer"; locate these anchor tags in the footer of the
login template (index.html) and add the rel attribute to each external link to
prevent reverse-tabnabbing.
In `@e2e/setup-e2e.sh`:
- Around line 311-319: The BASEDOMAIN assignment uses `local BASEDOMAIN=$(...)`
which masks failures under set -e; change to declare then assign separately so
command errors propagate: replace `local BASEDOMAIN=$(kubectl ...)` and `local
BASEDOMAIN=$(helm ...)` with a separate `local BASEDOMAIN` (or just
`BASEDOMAIN=`) followed by `BASEDOMAIN=$(kubectl ...)` and `BASEDOMAIN=$(helm
...)` respectively, and keep exports for ENDPOINT and LOGIN_ENDPOINT unchanged
so an error from `kubectl` or `helm`/`jq` will cause the script to exit rather
than produce empty endpoints.
In `@e2e/tests.bats`:
- Around line 203-204: The delete step using "jmp config client delete
test-client-oidc" is failing tests when the client is already absent; make the
cleanup idempotent by changing the test to tolerate a missing config — either
check existence first (e.g., call "jmp config client list" and only run "jmp
config client delete test-client-oidc" if present) or run the delete command in
a non-failing way (e.g., append a shell-safe fallback such as "|| true" or
redirect errors) so "jmp config client delete test-client-oidc" cannot cause the
test to exit non‑zero when the client does not exist.
In `@python/packages/jumpstarter-cli/jumpstarter_cli/login.py`:
- Around line 23-57: The fetch_auth_config function currently opens
aiohttp.ClientSession() without a timeout, allowing the CLI to hang; update it
to create the session with an aiohttp.ClientTimeout (e.g., 30 seconds) and pass
that timeout into aiohttp.ClientSession(timeout=...) so all awaits (session.get
and response.json()) are bounded; locate the session creation in
fetch_auth_config and replace/open it with a ClientSession configured with
aiohttp.ClientTimeout(total=30) (or a shared constant) to enforce a request
timeout.
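The JSON-parse hardening the last prompt requests can be sketched as follows (the real CLI would raise `click.ClickException`; `LoginConfigError` and `parse_auth_config` are illustrative stand-ins):

```python
import json


class LoginConfigError(Exception):
    """Stand-in for click.ClickException in this sketch."""


def parse_auth_config(body: str, url: str) -> dict:
    """Parse an auth-config response body, turning a 200-with-garbage
    response into a readable CLI error instead of a raw traceback."""
    try:
        return json.loads(body)
    except json.JSONDecodeError as err:
        raise LoginConfigError(
            f"Failed to parse auth config JSON from {url}: {err}"
        ) from err
```

A server that returns an HTML error page with status 200 now produces a one-line message naming the offending URL rather than a `JSONDecodeError` stack trace.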
controller/deploy/helm/jumpstarter/charts/jumpstarter-controller/model.py
...r/deploy/helm/jumpstarter/charts/jumpstarter-controller/templates/controller-deployment.yaml
...r/deploy/helm/jumpstarter/charts/jumpstarter-controller/templates/controller-deployment.yaml
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@controller/deploy/helm/jumpstarter/charts/jumpstarter-controller/model.py`:
- Around line 367-369: The Login model is missing a tls field causing schema
mismatch with the child chart; update the Login class to include a tls attribute
(e.g. tls: Optional[TLS] = Field(None, description="TLS configuration for the
login endpoint")) and either import or define the TLS model to match the child
chart's structure so parent chart validation accepts login.tls; ensure the new
TLS type matches the child chart's fields and update any imports in model.py
accordingly.
In `@e2e/tests.bats`:
- Around line 126-127: Replace hardcoded use of "@latest" for go-run-invocations
of yq (e.g., the command "go run github.com/mikefarah/yq/v4@latest" found in the
tests) and the cfssl/cfssljson installers with pinned version variables; add
top-level constants like YQ_VERSION and CFSSL_VERSION in the e2e setup/test
scripts (e.g., setup-e2e.sh and tests.bats), use them in the "go run
github.com/mikefarah/yq/v4@${YQ_VERSION}" invocations and in cfssl/cfssljson
download/install commands, and update every occurrence mentioned (tests.bats
occurrences and setup-e2e.sh occurrences) to reference these variables so e2e
tooling is deterministic.
🧹 Nitpick comments (1)
e2e/tests.bats (1)
85-90: Use `curl -fsS` so HTTP errors fail the test.

`curl -s` returns exit code 0 even on 4xx/5xx, which can mask failures. Consider failing fast on non-2xx responses.

✅ Suggested tweak

```diff
- run curl -s http://${LOGIN_ENDPOINT}
+ run curl -fsS "http://${LOGIN_ENDPOINT}"
```
controller/deploy/helm/jumpstarter/charts/jumpstarter-controller/model.py
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In
`@controller/deploy/helm/jumpstarter/charts/jumpstarter-controller/templates/controller-deployment.yaml`:
- Around line 68-72: The LOGIN_ENDPOINT template currently only checks
.Values.login.endpoint and then falls back to "login.<baseDomain>", which
ignores .Values.login.hostname; update the conditional for the env var BUILD of
LOGIN_ENDPOINT to: if .Values.login.endpoint use it, else if
.Values.login.hostname use that value (or prepend "login." if hostname is meant
to be just a host label), otherwise fall back to "login.<
.Values.global.baseDomain >"; modify the template block that sets LOGIN_ENDPOINT
to honor .Values.login.hostname before using .Values.global.baseDomain.
In `@e2e/setup-e2e.sh`:
- Around line 319-323: The command substitution for ENDPOINT is currently
combined with export which masks failures under set -euo pipefail; change the
line using ENDPOINT (export ENDPOINT=$(helm get values jumpstarter --output json
| jq -r '."jumpstarter-controller".grpc.endpoint')) to first assign ENDPOINT
using the helm/jq pipeline and then run export ENDPOINT so any non-zero exit
from the helm/jq pipeline will cause the script to fail; update the same pattern
if present for LOGIN_ENDPOINT/BASEDOMAIN to ensure command substitution failures
propagate.
In `@python/packages/jumpstarter-cli/jumpstarter_cli/login.py`:
- Around line 59-63: The code currently returns await response.json() inside the
async with session.get(...) block (in the function that fetches the auth
config), but if the server responds 200 with invalid/non-JSON payload
response.json() will raise a parsing error and bubble up; catch JSON parsing
errors (e.g., json.JSONDecodeError or ValueError) around the call to
response.json() and raise a click.ClickException with a clear message like
"Failed to parse auth config JSON from {url}: <error>" so callers get a friendly
error instead of a raw traceback.
🧹 Nitpick comments (2)
python/packages/jumpstarter-cli/jumpstarter_cli/login.py (1)
318-322: Consider making auto-default behavior optional.

The feature to automatically set the newly logged-in client as default is convenient for single-client workflows but may surprise users managing multiple clients who don't want their default changed. Consider adding a `--no-set-default` flag to opt out of this behavior.

💡 Optional enhancement

```diff
 @click.option(
     "--insecure-login-http",
     is_flag=True,
     help="Use HTTP instead of HTTPS when fetching config from login endpoint (for local testing).",
     default=False,
 )
+@click.option(
+    "--no-set-default",
+    is_flag=True,
+    help="Do not set this client as the default after login.",
+    default=False,
+)
 @opt_nointeractive
```

Then in the function body:

```diff
     save_config()
     # Set the new client as the default if it's a client config
-    if config_kind in ("client", "client_config") and isinstance(config, ClientConfigV1Alpha1):
+    if config_kind in ("client", "client_config") and isinstance(config, ClientConfigV1Alpha1) and not no_set_default:
         user_config = UserConfigV1Alpha1.load_or_create()
         user_config.use_client(config.alias)
         click.echo(f"Set '{config.alias}' as the default client.")
```

controller/internal/service/login/service.go (1)

135-138: Consider adding HTTP server timeouts for production hardening.

The `http.Server` lacks `ReadTimeout`, `WriteTimeout`, and `IdleTimeout` settings, which leaves it susceptible to slowloris-style attacks and potential connection exhaustion.

♻️ Suggested improvement

```diff
 server := &http.Server{
-	Addr:    port,
-	Handler: r,
+	Addr:         port,
+	Handler:      r,
+	ReadTimeout:  10 * time.Second,
+	WriteTimeout: 10 * time.Second,
+	IdleTimeout:  60 * time.Second,
 }
```
Improves our ability to log in by introducing a new endpoint/URL. Instead of requiring a very long set of login parameters (which would now also have to include the CA certificate), the CLI uses a new endpoint that provides discovery for all of that, making login much easier.
In addition, we switch the just-logged-in OIDC user to be the default user, avoiding the need for a subsequent jmp config client use $user after login.
Fixes-Issue: #62
Fixes-Issue: #63