
Conversation

@Fiona-Waters commented Oct 7, 2025

What this PR does / why we need it:

This PR introduces a unified ContainerBackend that automatically detects and uses either Docker or Podman for local training execution. This replaces the previous separate LocalDockerBackend and LocalPodmanBackend implementations with a single, cleaner abstraction. You can see the Docker and Podman implementations in separate commits.

This implementation tries Docker first, then falls back to Podman if Docker is unavailable. This can be overridden via ContainerBackendConfig.runtime to force a specific runtime ("docker" or "podman"). An error is raised if neither runtime is available.
Unit tests for the backend implementation have also been added. Examples for using Docker and Podman will be added to the Trainer repo later.

When testing manually on macOS, I had to specify the container_host like so:

  • Docker via Colima: container_host=f"unix://{os.path.expanduser('~')}/.colima/default/docker.sock"
  • Podman Desktop: container_host=f"unix://{os.path.expanduser('~')}/.local/share/containers/podman/machine/podman.sock"
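For illustration, a rough sketch of how these settings could be passed together. The field names (runtime, container_host) follow this PR's ContainerBackendConfig; the TrainerClient wiring shown here is an assumption, not the final API:

import os

from kubeflow.trainer import ContainerBackendConfig, TrainerClient  # import paths assumed

# Force Podman on macOS (Podman Desktop socket); omit `runtime` to auto-detect
# Docker first, then Podman.
config = ContainerBackendConfig(
    runtime="podman",
    container_host=f"unix://{os.path.expanduser('~')}/.local/share/containers/podman/machine/podman.sock",
)

# Hypothetical wiring: pass the backend config when constructing the client.
client = TrainerClient(backend_config=config)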

Which issue(s) this PR fixes (optional, in Fixes #<issue number>, #<issue number>, ... format, will close the issue(s) when PR gets merged):

Fixes #114 and #108

Checklist:
I need to look at adding docs. A README has been included.

  • Docs included if any changes are user facing

briangallagher and others added 2 commits October 7, 2025 15:53
Signed-off-by: Brian Gallagher <briangal@gmail.com>
Signed-off-by: Fiona Waters <fiwaters6@gmail.com>
@google-oss-prow

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign electronic-waste for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@andreyvelich (Member) left a comment

Thank you for this @Fiona-Waters!
As we discussed here: #111 (comment), can we consolidate Podman and Docker under a single container backend?
Given that those backends should have similar APIs, I think it would be better to consolidate them, similar to KFP: https://www.kubeflow.org/docs/components/pipelines/user-guides/core-functions/execute-kfp-pipelines-locally/#runner-dockerrunner

@Fiona-Waters (Author)

Thank you for this @Fiona-Waters! As we discussed here: #111 (comment), can we consolidate Podman and Docker under a single container backend? Given that those backends should have similar APIs, I think it would be better to consolidate them, similar to KFP: https://www.kubeflow.org/docs/components/pipelines/user-guides/core-functions/execute-kfp-pipelines-locally/#runner-dockerrunner

Thanks @andreyvelich I will look at updating the implementation.

@Fiona-Waters (Author)

@andreyvelich @astefanutti regarding comments on this PR and #111 this is what I propose:

We have 3 backends:

  • Kubernetes
  • Subprocess
  • Local Container

For the Local Container backend we automatically try Docker first, then Podman, and then fall back to Subprocess if neither runtime is available. We use the adapter pattern: a unified container client adapter interface, with Docker- and Podman-specific calls implemented in separate adapter classes.
There could also be an option where users can force a specific runtime, for example:
LocalContainerBackendConfig(runtime="docker")
This implementation will make it easy to add support for other container runtimes in the future, if that's a possibility.
Please let me know what you think. Thanks!
cc @briangallagher
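For illustration, a minimal sketch of the adapter shape described above. The class names match the ones used elsewhere in this PR (ContainerClientAdapter, DockerClientAdapter, PodmanClientAdapter), but the method names and signatures here are placeholders, not the final API:

import abc
from collections.abc import Iterator
from typing import Optional


class ContainerClientAdapter(abc.ABC):
    """Unified interface over a container runtime (Docker or Podman)."""

    @abc.abstractmethod
    def ping(self) -> None:
        """Raise if the runtime is not reachable."""

    @abc.abstractmethod
    def run_container(self, image: str, command: list[str]) -> str:
        """Start a container and return its id."""

    @abc.abstractmethod
    def get_logs(self, container_id: str) -> Iterator[str]:
        """Stream container logs line by line."""


class DockerClientAdapter(ContainerClientAdapter):
    """Docker-specific calls; a PodmanClientAdapter would mirror this using the Podman SDK."""

    def __init__(self, container_host: Optional[str] = None):
        import docker  # Docker SDK for Python

        self._client = (
            docker.DockerClient(base_url=container_host) if container_host else docker.from_env()
        )

    def ping(self) -> None:
        self._client.ping()

    def run_container(self, image: str, command: list[str]) -> str:
        container = self._client.containers.run(image, command=command, detach=True)
        return container.id

    def get_logs(self, container_id: str) -> Iterator[str]:
        container = self._client.containers.get(container_id)
        for line in container.logs(stream=True, follow=True):
            yield line.decode("utf-8", errors="replace")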

@andreyvelich (Member) commented Oct 8, 2025

Sure, that looks great @Fiona-Waters!

fall back to Subprocess if neither runtime is available

Why do we need to fall back to subprocess?
I would imagine we support 3 backends, and users decide what they want to use:

KubernetesBackend()
ContainerBackend()
LocalProcessBackend()

In the ContainerBackend users can select:

ContainerBackend(
    ContainerBackendConfig(container_runtime="docker")
)
or
ContainerBackend(
    ContainerBackendConfig(container_runtime="podman")
)

@astefanutti (Contributor)

@Fiona-Waters that sounds good to me. I agree the fallback logic may really apply to choosing the default container runtime.

Other than that, I'd be inclined to drop the "Local" prefix entirely. Even Kubernetes could run locally with KinD, and I doubt the SDK will ever do remote processes.

@Fiona-Waters (Author)

Sure, that looks great @Fiona-Waters!

fall back to Subprocess if neither runtime is available

Why do we need to fall back to subprocess? I would imagine we support 3 backends, and users decide what they want to use:

KubernetesBackend()
ContainerBackend()
LocalProcessBackend()

In the ContainerBackend users can select:

ContainerBackend(
    ContainerBackendConfig(container_runtime="docker")
)
or
ContainerBackend(
    ContainerBackendConfig(container_runtime="podman")
)

Understood. Let me see what I can do. Thank you for the swift reply!

@Fiona-Waters (Author)

@Fiona-Waters that sounds good to me. I agree the fallback logic may really apply to choosing the default container runtime.

Other than that, I'd be inclined to drop the "Local" prefix entirely. Even Kubernetes could run locally with KinD, and I doubt the SDK will ever do remote processes.

Ok cool. Let me see what I can do. Thank you!

@Fiona-Waters changed the title from "feat: Add Podman backend and sync Docker backend implementation" to "[WIP] feat: Add Podman backend and sync Docker backend implementation" on Oct 8, 2025
@Fiona-Waters force-pushed the podman-backend branch 3 times, most recently from 1f7c066 to bdde877, on October 10, 2025 15:48
Signed-off-by: Fiona Waters <fiwaters6@gmail.com>
@Fiona-Waters changed the title from "[WIP] feat: Add Podman backend and sync Docker backend implementation" to "feat: Add Podman backend and sync Docker backend implementation" on Oct 10, 2025
@Fiona-Waters (Author)

@andreyvelich @astefanutti @briangallagher
I've updated the PR. Please take a look. Thanks

@Fiona-Waters changed the title from "feat: Add Podman backend and sync Docker backend implementation" to "feat: Add ContainerBackend with Docker and Podman" on Oct 10, 2025
Signed-off-by: Fiona Waters <fiwaters6@gmail.com>
@astefanutti (Contributor)

/ok-to-test

)

# Store job in backend
self._jobs[job_name] = _Job(
@astefanutti (Contributor) commented Oct 15, 2025

Would it be possible to avoid relying on that in-memory "registry" and consistently rely on the state from the container runtime itself?

@Fiona-Waters (Author)

I've updated the implementation to store metadata as labels on containers and networks, allowing us to query the container runtime for all job information. See this commit, and please let me know what you think. Thanks
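For context, a rough sketch of the label-based approach using the Docker SDK; the label keys and image below are illustrative, not necessarily the ones used in this PR:

import docker

client = docker.from_env()

# Attach job metadata as labels when starting a container...
container = client.containers.run(
    "python:3.11-slim",  # illustrative image
    command=["python", "-c", "print('training...')"],
    labels={"trainer.kubeflow.org/job-name": "my-job", "trainer.kubeflow.org/node-rank": "0"},
    detach=True,
)

# ...then reconstruct job state by querying the runtime instead of an in-memory registry.
job_containers = client.containers.list(
    all=True, filters={"label": "trainer.kubeflow.org/job-name=my-job"}
)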

… memory

Signed-off-by: Fiona Waters <fiwaters6@gmail.com>
@astefanutti (Contributor) left a comment

@Fiona-Waters thanks for this awesome work!

That looks good to me overall.

/assign @kubeflow/kubeflow-sdk-team @briangallagher


from kubeflow.trainer.types import types as base_types

LOCAL_RUNTIMES_DIR = Path(__file__).parents[1] / "config" / "local_runtimes"
Contributor

Maybe:

Suggested change
LOCAL_RUNTIMES_DIR = Path(__file__).parents[1] / "config" / "local_runtimes"
LOCAL_RUNTIMES_DIR = Path(__file__).parents[1] / "config" / "container_runtimes"

otherwise it could be confusing with the local process backend?

@andreyvelich (Member) left a comment

Thank you @Fiona-Waters!
I left my initial messages.

print("\n".join(TrainerClient().get_job_logs(name=job_id)))
```

## Local Development
Member

Can we also include docs about Trainer local execution in the user guides?
https://www.kubeflow.org/docs/components/trainer/user-guides/
You can also add info from the @szaher PR: #95

@Fiona-Waters (Author)

WIP PR for this kubeflow/website#4221

@@ -0,0 +1,162 @@
# ContainerBackend
Member

I would suggest that we move these docs to the user guides for now: https://www.kubeflow.org/docs/components/trainer/user-guides/ as we discussed here: #95 (comment)

Comment on lines 93 to 124
if self.cfg.runtime:
    # User specified a runtime explicitly
    if self.cfg.runtime == "docker":
        adapter = DockerClientAdapter(self.cfg.container_host)
        adapter.ping()
        logger.info("Using Docker as container runtime")
        return adapter
    elif self.cfg.runtime == "podman":
        adapter = PodmanClientAdapter(self.cfg.container_host)
        adapter.ping()
        logger.info("Using Podman as container runtime")
        return adapter
else:
    # Auto-detect: try Docker first, then Podman
    try:
        adapter = DockerClientAdapter(self.cfg.container_host)
        adapter.ping()
        logger.info("Using Docker as container runtime")
        return adapter
    except Exception as docker_error:
        logger.debug(f"Docker initialization failed: {docker_error}")
        try:
            adapter = PodmanClientAdapter(self.cfg.container_host)
            adapter.ping()
            logger.info("Using Podman as container runtime")
            return adapter
        except Exception as podman_error:
            logger.debug(f"Podman initialization failed: {podman_error}")
            raise RuntimeError(
                "Neither Docker nor Podman is available. "
                "Please install Docker or Podman, or use LocalProcessBackendConfig instead."
            ) from podman_error
Member

I think this can be simplified as follows:

runtime_map = {
    "docker": DockerClientAdapter,
    "podman": PodmanClientAdapter,
}

def get_adapter(cfg):
    runtimes_to_try = [cfg.runtime] if cfg.runtime else ["docker", "podman"]

    last_error = None
    for runtime_name in runtimes_to_try:
        if runtime_name not in runtime_map:
            continue
        try:
            adapter = runtime_map[runtime_name](cfg.container_host)
            adapter.ping()
            logger.info(f"Using {runtime_name} as container runtime")
            return adapter
        except Exception as e:
            logger.debug(f"{runtime_name} initialization failed: {e}")
            last_error = e

    raise RuntimeError(
        "Neither Docker nor Podman is available. "
        "Please install Docker or Podman, or use LocalProcessBackendConfig instead."
    ) from last_error

Comment on lines 158 to 160
runtime: types.Runtime | None = None,
initializer: types.Initializer | None = None,
trainer: types.CustomTrainer | types.BuiltinTrainer | None = None,
Member

Please don't use | since we still support Python 3.9 for now. Let's be consistent across backends:

runtime: Optional[types.Runtime] = None,

@@ -0,0 +1,25 @@
apiVersion: trainer.kubeflow.org/v1alpha1
Member

Instead of installing the runtimes, can we just read the image version from GitHub dynamically?

@Fiona-Waters (Author)

Let me look into that. For offline support should we fall back to providing this?
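As a rough illustration of that idea, fetching dynamically with a cached copy as the offline fallback (the constants mirror ones added later in this PR; the function itself is hypothetical):

import urllib.request
from pathlib import Path

GITHUB_RUNTIMES_BASE_URL = (
    "https://raw.githubusercontent.com/kubeflow/trainer/master/manifests/base/runtimes"
)
CACHE_DIR = Path.home() / ".kubeflow_trainer" / "runtime_cache"


def fetch_runtime_manifest(name: str) -> str:
    """Fetch a runtime manifest from GitHub, falling back to the local cache when offline."""
    cache_file = CACHE_DIR / name
    try:
        with urllib.request.urlopen(f"{GITHUB_RUNTIMES_BASE_URL}/{name}", timeout=10) as resp:
            manifest = resp.read().decode("utf-8")
        CACHE_DIR.mkdir(parents=True, exist_ok=True)
        cache_file.write_text(manifest)
        return manifest
    except OSError:
        # Offline or GitHub unreachable: fall back to the cached (or bundled) copy if present.
        if cache_file.exists():
            return cache_file.read_text()
        raise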

image: Optional[str] = Field(default=None)
pull_policy: str = Field(default="IfNotPresent")
auto_remove: bool = Field(default=True)
gpus: Optional[Union[int, bool]] = Field(default=None)
Member

How is this used?

@Fiona-Waters (Author)

It allows you to override the default container image specified in the ClusterTrainingRuntime.

@Fiona-Waters (Author)

Actually, I think you are referring to gpus. It was carried over from a previous iteration but isn't being used currently. I can update it to include GPU support.

Contributor

I'm a bit confused by this, as there can be multiple ClusterTrainingRuntimes and it's defined by TrainJobs.

@Fiona-Waters (Author)

Apologies, I understand now that it doesn't make sense to allow the image to be updated here, as it is not per job. Will remove this. Thank you.

gpus: Optional[Union[int, bool]] = Field(default=None)
env: Optional[dict[str, str]] = Field(default=None)
container_host: Optional[str] = Field(default=None)
workdir_base: Optional[str] = Field(default=None)
Member

Can we initially use the default dir, and if users need to configure it we can give them that option?

@Fiona-Waters (Author)

I can remove it for now and use the default dir.

pull_policy: str = Field(default="IfNotPresent")
auto_remove: bool = Field(default=True)
gpus: Optional[Union[int, bool]] = Field(default=None)
env: Optional[dict[str, str]] = Field(default=None)
Member

Do we start a new container every time a Job is submitted?
If yes, this might be controlled via the train() API.

@Fiona-Waters (Author)

Yes, you're right - similar to the image param. Will remove this. Thanks

env: Optional[dict[str, str]] = Field(default=None)
container_host: Optional[str] = Field(default=None)
workdir_base: Optional[str] = Field(default=None)
runtime: Optional[Literal["docker", "podman"]] = Field(default=None)
Member

To make it less confusing with TrainingRuntime, can we name it:

Suggested change
runtime: Optional[Literal["docker", "podman"]] = Field(default=None)
container_runtime: Optional[Literal["docker", "podman"]] = Field(default="docker")

from collections.abc import Iterator


class ContainerClientAdapter(abc.ABC):
Member

Can we call it BaseContainerClientAdapter(), similar to:

class ExecutionBackend(abc.ABC):

I would suggest we move them to a subdirectory:

container/adapters/base.py
container/adapters/docker.py
container/adapters/podman.py

WDYT @Fiona-Waters ?

@Fiona-Waters (Author)

Yes good idea, will do.
BTW thank you for your review. Will update the PR tomorrow hopefully.

@Fiona-Waters (Author)

@andreyvelich I have addressed all of your comments, please review again when you can. I have removed the README.md and will add it along with docs on local execution to the user guides.
@astefanutti could you please review again.
Thank you both.

Signed-off-by: Fiona Waters <fiwaters6@gmail.com>
"""
Create per-job working directory on host.
Working directories are created under ~/.kubeflow_trainer/localcontainer/<job_name>
Contributor

Maybe ~/.kubeflow/trainer/containers/... ?


logger = logging.getLogger(__name__)

CONTAINER_RUNTIMES_DIR = Path(__file__).parents[1] / "config" / "container_runtimes"
Contributor

I think I've suggested "container_runtimes" before, but looking at it maybe training_runtimes would be more appropriate?

logger = logging.getLogger(__name__)

CONTAINER_RUNTIMES_DIR = Path(__file__).parents[1] / "config" / "container_runtimes"
CACHE_DIR = Path.home() / ".kubeflow_trainer" / "runtime_cache"
Contributor

Maybe ~/.kubeflow/trainer/cache?


# GitHub runtimes configuration
GITHUB_RUNTIMES_BASE_URL = (
"https://raw.githubusercontent.com/kubeflow/trainer/master/manifests/base/runtimes"
Contributor

Note for later: we should probably find a way to rely on released runtimes.

@@ -0,0 +1,417 @@
# Copyright 2025 The Kubeflow Authors.
Contributor

training_runtime_loader.py?

gpus: Optional[Union[int, bool]] = Field(default=None)
container_host: Optional[str] = Field(default=None)
container_runtime: Optional[Literal["docker", "podman"]] = Field(default=None)
use_github_runtimes: bool = Field(default=True)
Contributor

Maybe we can be more structured here and have a training_runtimes argument that could have different options, like pointing to some URLs.
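For illustration, a hypothetical shape for such an argument (purely a sketch of the suggestion above, not part of this PR):

from typing import Optional

from pydantic import BaseModel, Field


class TrainingRuntimesSource(BaseModel):
    """Hypothetical structured replacement for the use_github_runtimes flag."""

    # Fetch manifests from remote URLs (e.g. a pinned release) when set...
    urls: Optional[list[str]] = Field(default=None)
    # ...or load them from a local directory of manifests.
    local_dir: Optional[str] = Field(default=None)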

"""

from kubeflow.trainer.backends.container_runtime_loader import (
CONTAINER_RUNTIMES_DIR,
Contributor

Suggested change
CONTAINER_RUNTIMES_DIR,
TRAINING_RUNTIMES_DIR,


from kubeflow.trainer.backends.container_runtime_loader import (
CONTAINER_RUNTIMES_DIR,
get_container_runtime,
Contributor

Suggested change
get_container_runtime,
get_training_runtime,

from kubeflow.trainer.backends.container_runtime_loader import (
CONTAINER_RUNTIMES_DIR,
get_container_runtime,
list_container_runtimes,
Contributor

Suggested change
list_container_runtimes,
list_training_runtimes,
