
Design proposal for supported image formats #6308


Merged (1 commit, Mar 11, 2025)

Conversation

@gthvn1 (Contributor) commented Feb 17, 2025

Proposing a new field to expose the supported image formats for a given SR. This information is retrieved from SMAPIv1 plugins.

@psafont (Member) commented Feb 17, 2025

Thanks! I'm not an expert here, so I have a few questions:

Is there a default value for the field, or do all backends need to declare the image formats in the new field?

Are there plans to start using this field from within xapi (to select the image format for certain operations), or is this only going to be informational for the user? If there are, what do these look like? I would much rather have a more complete, holistic design proposal here than many designs that build on top of each other, as it's easier to see design holes.


# Design Proposal

To expose the available image formats to clients (e.g., the `xe` CLI), we propose
A reviewer (Contributor) commented:

The wording is a bit surprising; xe is not a typical API client; XenCenter or Python code are clients. The proposal here is to extend the SM API object (class) with a new field. The main consideration will have to be upgrades - how is this field populated during a version upgrade? So this should be discussed in API and not XE primarily.

@MarkSymsCtx (Contributor) commented Feb 17, 2025

Yep, thanks, I saw it. I think this only relates to the xcp-ng fork of SM/blktap that implements qcow2 support.

@stormi (Contributor) commented Feb 17, 2025

There's no will to make it a permanent fork though. We moved on quickly on this topic out of necessity, with all the SMAPIv3 drivers being proprietary (such as the XFS one). Upstream first remains our approach whenever doable (hence this design proposal which is a first step and raises valid questions regarding upgrade and how it can affect XenServer). However, the code changes are big enough that they'll require both careful testing on our side first and proper preparation of the upstream contribution.

@gthvn1 (Author) replied:

I mentioned the xe CLI because I had an architecture in mind where the xe CLI is presented as a client, but yes, I will replace it with XenOrchestra, OpenStack, XenCenter, ...
Correct, the proposal is to extend the SM API object with a new field. The default value (I will add it to the proposal) will be an empty array. As this field is informative, it is up to the storage driver to provide the information; if it doesn't provide any, the field will be empty. The purpose here is for our usage and qcow2, but from the XAPI point of view it can be any information. For example, if someone adds a driver with VHDX support to an SR, that can be exposed using this field.

@gthvn1 (Author) commented Feb 17, 2025

> Thanks! I'm not an expert here, so I have a few questions:
>
> Is there a default value for the field, or do all backends need to declare the image formats in the new field?

Yes, the default value is just nothing (an empty array). So if your storage driver does not provide any information (typically if you are using the current drivers without any modifications), the field will be empty.

> Are there plans to start using this field from within xapi (to select the image format for certain operations), or is this only going to be informational for the user?

Currently there is no plan other than being informational for the user.

@stormi (Contributor) commented Feb 17, 2025

> Currently there is no plan other than being informational for the user.

This is true of the xe output.

But the XAPI field will be very useful for XO to know what formats to offer to users, since in this model SRs can support both VHD and QCOW2.

I agree with @psafont here: an explanation of the overall design would be good.

@gthvn1 (Author) commented Feb 17, 2025

> > Currently there is no plan other than being informational for the user.
>
> This is true of the xe output.
>
> But the XAPI field will be very useful for XO to know what formats to offer to users, since in this model SRs can support both VHD and QCOW2.

Sorry, I misunderstood Pau's question. I meant there is no plan to use it internally in XAPI, but yes, it is useful for clients. Thanks @stormi for the clarification.

@stormi (Contributor) commented Feb 17, 2025

I'll give you my understanding of the current design. I don't think there's a written version at the moment, and I think we definitely should write one, which covers all of XAPI, sm, qemu, and any other component involved. Ping @Wescoeur and @AnthoineB.

The main idea is to make current SRs compatible with a new format, QCOW2, to go beyond the current 2 TB limit. This was decided after a study by @AnthoineB, who deemed it the shortest maintainable path towards 2 TB+ support. SMAPIv3 was considered but not deemed ready yet, and difficulties working together with XenServer on this specific topic also weighed in the decision; it's not abandoned for future work on the storage stack, either. Adding QCOW2 support to existing storage drivers also has the perk of offering a smooth transition for existing users, but it requires that we can know from XAPI which SRs support which formats, hence @gthvn1's proposal here.

The overall changes involve (I'll let the experts correct me where I'm wrong):

  • Adding qcow2 support to tapdisk
  • Adding qcow2 support to storage drivers in sm
  • Letting XAPI know about qcow2 (I don't know the details; @gthvn1 knows them better)
  • Exposing supported formats via XAPI

Current state is alpha, with users already providing feedback on the forum: https://xcp-ng.org/forum/topic/10308/dedicated-thread-removing-the-2tib-limit-with-qcow2-volumes

@psafont (Member) commented Feb 17, 2025

> Adding QCOW2 support to existing storage drivers also has the perk of offering a smooth transition for existing users, but requires that we can know which SRs support what formats from XAPI

How is the request to use one format or the other threaded through to existing operations? Does xapi already support this?

@gthvn1 (Author) commented Feb 17, 2025

> How is the request to use one format or the other threaded through to existing operations? Does xapi already support this?

So today, in our current alpha release, it is qcow2 by default and you cannot change it. But in the final version, which should arrive quickly, we will choose the type of the VDI to create by setting sm-config during VDI creation. So you will be able to create the VDI using:

# xe vdi-create ... sm-config=image-format=qcow2 ...
# xe vdi-create ... sm-config=image-format=vhd ...

This option is managed directly by the SMAPI plugin, so we won't need any modification in XAPI (AFAIU) to support it. And if you pass a wrong format (one that is not supported by the SR), you will get an error returned by the SMAPI plugin. So if XenCenter or XenOrchestra already has the information that a given kind of SR supports only QCOW2, or only VHD, it will know what can be done.
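To illustrate the plugin-side check described above, here is a minimal Python sketch. The constant `SUPPORTED_IMAGE_FORMATS`, the helper `pick_image_format`, and the exact error behaviour are illustrative assumptions, not code from the real sm plugins:

```python
# Hypothetical sketch of how an SMAPIv1 driver could validate the
# image-format key passed via sm-config at VDI creation time.
# Names here are assumptions for illustration only.

SUPPORTED_IMAGE_FORMATS = ["qcow2", "vhd", "raw"]  # first entry: driver default


def pick_image_format(sm_config):
    """Return the format requested in sm-config, or the driver default.

    Raises ValueError when the requested format is not supported by
    this SR, mirroring the error the SMAPI plugin would return.
    """
    requested = sm_config.get("image-format")
    if requested is None:
        return SUPPORTED_IMAGE_FORMATS[0]
    if requested not in SUPPORTED_IMAGE_FORMATS:
        raise ValueError("unsupported image format: %s" % requested)
    return requested
```

With this shape, `xe vdi-create ... sm-config:image-format=vhd` would reach the driver as `{"image-format": "vhd"}`, and an unknown value would be rejected by the plugin rather than by XAPI.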

@gthvn1 (Author) commented Feb 17, 2025

In fact, what we identified as missing in XAPI is the ability to migrate a VDI. For VDI migration, XAPI currently uses vhd-tool, so we will probably need an equivalent for the QCOW2 format. A tool like qcow-tool is likely a good starting point: it does not yet support streaming data, but it looks like a solid foundation on which to build that feature. We plan to work on it soon and will probably create a new design proposal for it. So it is not directly related to this design, but it shows you the big picture. I can add a comment about it in the current design proposal, but I'm not sure it makes sense...

@psafont (Member) commented Feb 17, 2025

> So today, in our current alpha release, it is qcow2 by default and you cannot change it. But in the final version, which should arrive quickly, we will choose the type of the VDI to create by setting sm-config during VDI creation.

Thanks, this makes sense.

> In fact, what we identified as missing in XAPI is the ability to migrate a VDI.

How is the new format for storage decided? Can the value still be passed to the driver without xapi involvement? I know at the very least that Sparse_dd (in xen-api.git) needs to be aware of the new format, as it facilitates the migration, but I don't know how it's being told about the format of the destination.

@gthvn1 (Author) commented Feb 17, 2025

> How is the new format for storage decided? Can the value still be passed to the driver without xapi involvement? I know at the very least that Sparse_dd (in xen-api.git) needs to be aware of the new format, as it facilitates the migration, but I don't know how it's being told about the format of the destination.

I'm not 100% sure yet, but I have at least tested a VDI copy. I have a VM with a VDI; the VDI is on a storage with qcow2 support; the VM is halted. When doing a VDI copy to the same SR, I observed that two tap processes are created:

# tap-ctl list
   38615    0    0      qcow2 /var/run/sr-mount/6e13a0c0-114b-7591-2d87-98725060f8fe/20c15218-c477-4bbe-99e9-5b884ee85fda.qcow2
   38729    1    0      qcow2 /var/run/sr-mount/6e13a0c0-114b-7591-2d87-98725060f8fe/f7e2ddff-7242-476b-a4f1-87b8d0918034.qcow2

And the sparse_dd process uses the block device created under /dev/sm/backend/..., which is the tap device. I see:

Feb 17 16:51:21 xcp-gtn-ip12 xapi: [debug||5635 /var/lib/xcp/xapi|VDI.copy R:78a46907e958|sparse_dd_wrapper] /usr/libexec/xapi/sparse_dd -machine -src /dev/sm/backend/6e13a0c0-114b-7591-2d87-98725060f8fe/20c15218-c477-4bbe-99e9-5b884ee85fda -dest /dev/sm/backend/6e13a0c0-114b-7591-2d87-98725060f8fe/f7e2ddff-7242-476b-a4f1-87b8d0918034 -size 2147483648 -good-ciphersuites ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256 -prezeroed

So currently it looks like VDI copy works out of the box. And in fact, exporting a VDI to a raw file also works, because vhd-tool can use the raw format with /dev/sm/backend/... as the source. So the conversion to/from qcow2 is done by the tap process.

But of course, you currently cannot export/import the VDI to/from a qcow2 file (that's the part that is missing, and it requires integrating/improving qcow-tool). I also think that live migration is not working, but I'm not sure; I'm currently testing.

Sorry if it's not very clear yet, and I hope my examples help with understanding.

@gthvn1 (Author) commented Feb 17, 2025

> How is the new format for storage decided?

Oh, I misunderstood your question ;). Currently we expect to keep the same format when migrating, but indeed it could be a nice feature to be able to choose a format for the destination... Not sure how to do it right now, but we need to think about it 👍

@gthvn1 (Author) commented Feb 20, 2025

I have reworded the design for better readability (I hope) and correctness.

@psafont (Member) commented Feb 28, 2025

Friendly reminder that a way to decide the format in storage migrations is still needed here (although what's present is a good start).

@gthvn1 (Author) commented Feb 28, 2025

Oh yes, sure, I will add it.
Edit: I would like it to be transparent for XAPI in this first version (by keeping the same format, for example), even if I think the ideal would be to add the possibility of choosing the destination type...

Proposing a new field to expose the supported image formats for a given SR.
This information is retrieved from SMAPIv1 plugins.

Signed-off-by: Guillaume <guillaume.thouvenin@vates.tech>
@gthvn1 force-pushed the feature/supported-image-formats branch from 39dbd90 to 8bb3d19 on March 3, 2025
@psafont (Member) left a comment:

While it's missing a way to pick the format when migrating, this is a good start. We can make more revisions later on.

@gthvn1 (Author) commented Mar 11, 2025

I added a "hint" (I agree that it is a first version) about the way to pick the format when migrating. The idea is that the list is ordered and the first format listed is the storage driver's preferred format. So when you migrate, this is the format that will be used, and an error will occur if we cannot use it, for example when migrating a 4 TB qcow2 VDI to vhd. But yes, I agree that a second version, once we have more hindsight, will probably be needed.
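The "first entry is preferred" rule above can be sketched in a few lines of Python. The helper `destination_format` and the hard-coded ~2 TiB VHD limit are illustrative assumptions, not xapi code:

```python
# Sketch of the migration rule: the destination SR tries the first
# (preferred) format from supported-image-formats and fails when the
# VDI cannot be represented in it, e.g. a 4 TB disk into VHD.
# The constant and helper name are assumptions for illustration.

VHD_MAX_BYTES = 2 * 1024**4  # VHD's ~2 TiB size limit


def destination_format(supported_formats, vdi_size_bytes):
    """Pick the destination format for a migrated VDI, or raise."""
    if not supported_formats:
        raise ValueError("SR reports no supported image formats")
    preferred = supported_formats[0]
    if preferred == "vhd" and vdi_size_bytes > VHD_MAX_BYTES:
        raise ValueError("VDI too large for preferred format 'vhd'")
    return preferred
```

Under this rule, an SR advertising `["qcow2", "vhd", "raw"]` would always receive qcow2 VDIs on migration, while an SR advertising `["vhd", ...]` would reject disks above the VHD limit.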

@lindig lindig added this pull request to the merge queue Mar 11, 2025
of strings representing the supported image formats
(for example: `["vhd", "raw", "qcow2"]`).

The list designates the driver's preferred VDI format as its first entry. That
A reviewer (Contributor) commented:

Ok, is there a way for the user to specify his/her preference?

@gthvn1 (Author) replied:

No, not with this field. I think that if a user wants to specify a preference, it will be done, for example, via a new parameter to vdi pool migrate. So it is not in the scope of this proposal; the scope here is just how to expose the formats supported by a storage driver.

we propose adding a new field called `supported-image-formats` to the Storage Manager (SM)
module. This field will be included in the output of the `SM.get_all_records` call.

The `supported-image-formats` field will be populated by retrieving information
A reviewer (Contributor) commented:

So just to check that I understand this field correctly, if a user wants to use qcow2 in a, say, NFS SR, all the VDIs created on that SR will now be qcow2? Will this affect the way the SMAPI drivers handle calls? For example, I imagine VDI.compose would work differently on VHD and QCOW2? Will this be handled by the forked SM?

@gthvn1 (Author) replied:

Yes, in the first version it works like that. But that is not in the scope of this proposal; here the goal is just to expose the supported formats, nothing more. You are right, though, that behind this we are implementing qcow2 support. We do that because when you enable qcow2 support you will use the qcow2 format on your SR (NFS is a good example), and the point is that you can still use your former VHD disks, so the transition is kind of transparent. And yes, the SMAPI driver will be able to handle both formats. I'll let @Wescoeur, @AnthoineB and @Nambrok answer to add details or correct me on this :)

The list designates the driver's preferred VDI format as its first entry. That
means that when migrating a VDI, the destination storage repository will
attempt to create a VDI in this preferred format. If the default format cannot
be used (e.g., due to size limitations), an error will be generated.
A reviewer (Contributor) commented:

Ideally this needs to be part of the precheck in migration, but I think the idea here is fine

Merged via the queue into xapi-project:master with commit b9c8154 Mar 11, 2025
15 checks passed

To expose the available image formats to clients (e.g., XenCenter, XenOrchestra, etc.),
we propose adding a new field called `supported-image-formats` to the Storage Manager (SM)
module. This field will be included in the output of the `SM.get_all_records` call.
A reviewer (Member) commented:

The datamodel field must have underscores, so supported_image_formats, while the CLI normally uses hyphens.
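As a sketch of how a client would consume the field under the underscore naming, here is a small Python helper. The `formats_by_driver` name and the simplified record layout are illustrative assumptions; in a live session the records would come from `session.xenapi.SM.get_all_records()` via the XenAPI Python bindings.

```python
# Hypothetical sketch: consuming the proposed supported_image_formats
# field from the result of SM.get_all_records. Records are modelled
# as a dict of opaque refs to per-SM record dicts, which matches the
# general shape of get_all_records results (simplified here).


def formats_by_driver(sm_records):
    """Map each SM driver's name_label to its advertised image formats."""
    return {
        rec["name_label"]: rec.get("supported_image_formats", [])
        for rec in sm_records.values()
    }

# Drivers that do not declare the field fall back to an empty list,
# matching the proposal's default of an empty array.
```

A client such as XenOrchestra could use such a mapping to decide which formats to offer when creating a VDI on a given SR type.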

of strings representing the supported image formats
(for example: `["vhd", "raw", "qcow2"]`).

The list designates the driver's preferred VDI format as its first entry. That
A reviewer (Member) commented:

"The list" here refers to what is returned from the SMAPI. It looks like the intention is to put this straight into a xapi DB field, such that it can be queried through the XenAPI. However, the xapi datamodel uses sets rather than lists, so you cannot rely on a particular order. If you want to mark a particular format as preferred, you could add a separate field for that (e.g. SM.preferred_image_format).

@gthvn1 (Author) replied:

I agree. I think we will have a version 2 of the proposal ;)
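If a version 2 does split the preference into its own field, client-side selection could look like the following sketch. `SM.preferred_image_format` is the field name suggested in the review above (hypothetical, not an existing API), and `effective_preferred` is an illustrative helper:

```python
# Hypothetical sketch: since xapi datamodel sets are unordered, a
# separate preferred_image_format field would carry the preference,
# with supported_image_formats only listing what is supported.


def effective_preferred(record):
    """Return the declared preference if it is supported, else fall back."""
    formats = set(record.get("supported_image_formats", []))
    preferred = record.get("preferred_image_format")
    if preferred in formats:
        return preferred
    # Deterministic fallback when no valid preference is declared.
    return min(formats) if formats else None
```

This keeps the set semantics of the datamodel intact while still letting a driver express the "first entry" preference that the current proposal encodes by ordering.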

At XCP-ng, we are enhancing support for QCOW2 images in SMAPI. The primary
motivation for this change is to overcome the 2TB size limitation imposed
by the VHD format. By adding support for QCOW2, a Storage Repository (SR) will
be able to host disks in VHD and/or QCOW2 formats, depending on the SR type.
A reviewer (Member) commented:

Would we then need a field on the SR as well to indicate which format is used? I assume that when creating an SR, you'll want to be able to choose a format? You could then, for example, create an "NFS QCOW2 SR" or an "NFS VHD SR". Or would a single SR be able to contain a mix of VDIs using different supported types, and if so, how would that work?

@gthvn1 (Author) replied:

A single SR will be able to contain a mix of VDIs. In our first version (an alpha), there is a default format: you have your current SR that uses VHD, then you install the package with qcow2 support, and from then on you create qcow2 files by default while still being able to use the existing VHD disks. In the future (this was not the scope of my proposal, but maybe it has to be) we will be able to choose the format when creating a VDI. So we will need to add a parameter to choose the format at VDI creation, and the destination format in case of migration. That was not the scope of version 1 of this proposal, but the more we discuss it, the more it makes sense to add this part too.

Another commenter replied:

A single SR can host multiple image formats, just as it can host RAW and VHD at the moment; it will simply be QCOW2, RAW and VHD, with one preferred as the default when the format is not specified at VDI creation.
For example: xe vdi-create sr-uuid=3ef2825a-f700-4224-ce72-91174581acc7 type=user sm-config:type=qcow2 virtual-size=1GiB name-label="Test QCOW2"


The list designates the driver's preferred VDI format as its first entry. That
means that when migrating a VDI, the destination storage repository will
attempt to create a VDI in this preferred format. If the default format cannot
A reviewer (Member) commented:

Is the intention to automatically change the format of a VDI to the "preferred" format when doing a storage migration? Is there no choice or desire to keep the current format of the VDI?

@gthvn1 (Author) replied:

Currently the "preferred" format is imposed by the storage driver, so we cannot change it. But yes, in the next release it makes sense to at least be able to either keep the current format or use the qcow2 format.
The current design does not take this into account, but I think that adding two fields, one for the destination format during migration and one for the format used when creating the VDI, seems important after all.

Another commenter replied:

It's just because of how migration works at the moment: it creates a new VDI (with the default preferred format) and streams the data of the old VDI into the new one.

@gthvn1 gthvn1 deleted the feature/supported-image-formats branch March 17, 2025 10:54
8 participants