Hello,
A mesh/shape completion application is mentioned in the paper (see Figure 9). I was wondering how to apply this kind of feature, and I have three questions:
- Once the autoencoder has been trained, is it possible to use it alone for mesh completion? In other words, what are the advantages/strengths of the transformer over the autoencoder for this task, given that both output the same thing, i.e. a mesh? Can the autoencoder perform this task on its own?
- I have trained the autoencoder on my own mesh dataset. If I now want to use the transformer for mesh completion, do I need to train it specifically for this task, i.e. pass a partial mesh with its associated complete mesh as ground truth during training? Or should I train the transformer "normally", i.e. pass full meshes in all batches, so that the model learns on its own and can perform mesh completion at inference time?
- I am not very familiar with transformers and tokens. My understanding is that the triangles (words) of the mesh (sentence) are first converted into tokens by the autoencoder (is that correct?). At inference time, suppose we want to complete table tops given only the table legs: should I feed the transformer a partial mesh (the legs) and call its generate function with all the tokens of that partial mesh, or should I feed the complete mesh (legs and top) with only the first tokens, corresponding to the legs? In a nutshell, how should I construct the prompt for the shape-completion task?
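
To make the last question concrete, here is a minimal sketch of what I imagine the prompt construction looks like. All names here are hypothetical (not the repo's actual API), and I am assuming each face is encoded into a fixed number of codes by the autoencoder, which are then flattened into a 1D token sequence for the transformer:

```python
# Hypothetical sketch of completion-prompt construction.
# Assumption: the autoencoder encodes each triangle into a fixed number
# of discrete codes (tokens_per_face), and the transformer consumes the
# flattened 1D sequence of these codes.

def build_completion_prompt(face_tokens, num_prompt_faces, tokens_per_face=6):
    """Flatten the codes of the first `num_prompt_faces` faces
    (e.g. the table legs) into a 1D prompt for the transformer."""
    prompt = []
    for face in face_tokens[:num_prompt_faces]:
        assert len(face) == tokens_per_face, "each face must have a fixed code count"
        prompt.extend(face)
    return prompt

# Toy example: 4 faces, 6 codes each (dummy token ids).
face_tokens = [[i * 6 + j for j in range(6)] for i in range(4)]
prompt = build_completion_prompt(face_tokens, num_prompt_faces=2)
print(len(prompt))  # 12 tokens = 2 faces * 6 codes per face
```

Is this the right mental model, i.e. the transformer's generate function would then be called with this prefix and asked to continue the sequence, after which the autoencoder decodes the full token sequence back into a mesh?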
Thanks in advance for your clarifications!