[Bot] Update inference types #3104

Open · wants to merge 1 commit into base: main
22 changes: 15 additions & 7 deletions docs/source/en/package_reference/inference_types.md
@@ -57,18 +57,14 @@ This part of the lib is still under development and will be improved in future r

[[autodoc]] huggingface_hub.ChatCompletionInputFunctionName

[[autodoc]] huggingface_hub.ChatCompletionInputJSONSchema
[[autodoc]] huggingface_hub.ChatCompletionInputGrammarType

[[autodoc]] huggingface_hub.ChatCompletionInputJSONSchemaConfig

[[autodoc]] huggingface_hub.ChatCompletionInputMessage

[[autodoc]] huggingface_hub.ChatCompletionInputMessageChunk

[[autodoc]] huggingface_hub.ChatCompletionInputResponseFormatJSONObject

[[autodoc]] huggingface_hub.ChatCompletionInputResponseFormatJSONSchema

[[autodoc]] huggingface_hub.ChatCompletionInputResponseFormatText

[[autodoc]] huggingface_hub.ChatCompletionInputStreamOptions

[[autodoc]] huggingface_hub.ChatCompletionInputTool
@@ -197,6 +193,18 @@ This part of the lib is still under development and will be improved in future r



## image_to_video

[[autodoc]] huggingface_hub.ImageToVideoInput

[[autodoc]] huggingface_hub.ImageToVideoOutput

[[autodoc]] huggingface_hub.ImageToVideoParameters

[[autodoc]] huggingface_hub.ImageToVideoTargetSize



## object_detection

[[autodoc]] huggingface_hub.ObjectDetectionBoundingBox
22 changes: 15 additions & 7 deletions docs/source/ko/package_reference/inference_types.md
@@ -56,18 +56,14 @@ rendered properly in your Markdown viewer.

[[autodoc]] huggingface_hub.ChatCompletionInputFunctionName

[[autodoc]] huggingface_hub.ChatCompletionInputJSONSchema
[[autodoc]] huggingface_hub.ChatCompletionInputGrammarType

[[autodoc]] huggingface_hub.ChatCompletionInputJSONSchemaConfig

[[autodoc]] huggingface_hub.ChatCompletionInputMessage

[[autodoc]] huggingface_hub.ChatCompletionInputMessageChunk

[[autodoc]] huggingface_hub.ChatCompletionInputResponseFormatJSONObject

[[autodoc]] huggingface_hub.ChatCompletionInputResponseFormatJSONSchema

[[autodoc]] huggingface_hub.ChatCompletionInputResponseFormatText

[[autodoc]] huggingface_hub.ChatCompletionInputStreamOptions

[[autodoc]] huggingface_hub.ChatCompletionInputTool
@@ -196,6 +192,18 @@ rendered properly in your Markdown viewer.



## image_to_video[[huggingface_hub.ImageToVideoInput]]

[[autodoc]] huggingface_hub.ImageToVideoInput

[[autodoc]] huggingface_hub.ImageToVideoOutput

[[autodoc]] huggingface_hub.ImageToVideoParameters

[[autodoc]] huggingface_hub.ImageToVideoTargetSize



## object_detection[[huggingface_hub.ObjectDetectionBoundingBox]]

[[autodoc]] huggingface_hub.ObjectDetectionBoundingBox
7 changes: 3 additions & 4 deletions src/huggingface_hub/inference/_generated/types/__init__.py
@@ -24,13 +24,11 @@
ChatCompletionInputFunctionDefinition,
ChatCompletionInputFunctionName,
ChatCompletionInputGrammarType,
ChatCompletionInputJSONSchema,
ChatCompletionInputGrammarTypeType,
ChatCompletionInputJSONSchemaConfig,
ChatCompletionInputMessage,
ChatCompletionInputMessageChunk,
ChatCompletionInputMessageChunkType,
ChatCompletionInputResponseFormatJSONObject,
ChatCompletionInputResponseFormatJSONSchema,
ChatCompletionInputResponseFormatText,
ChatCompletionInputStreamOptions,
ChatCompletionInputTool,
ChatCompletionInputToolCall,
@@ -85,6 +83,7 @@
ImageToTextOutput,
ImageToTextParameters,
)
from .image_to_video import ImageToVideoInput, ImageToVideoOutput, ImageToVideoParameters, ImageToVideoTargetSize
from .object_detection import (
ObjectDetectionBoundingBox,
ObjectDetectionInput,
48 changes: 14 additions & 34 deletions src/huggingface_hub/inference/_generated/types/chat_completion.py
@@ -26,8 +26,8 @@ class ChatCompletionInputMessageChunk(BaseInferenceType):
@dataclass_with_extra
class ChatCompletionInputFunctionDefinition(BaseInferenceType):
name: str
parameters: Any
description: Optional[str] = None
parameters: Any


@dataclass_with_extra
@@ -46,50 +46,30 @@ class ChatCompletionInputMessage(BaseInferenceType):


@dataclass_with_extra
class ChatCompletionInputJSONSchema(BaseInferenceType):
class ChatCompletionInputJSONSchemaConfig(BaseInferenceType):
name: str
"""
The name of the response format.
"""
"""The name of the response format."""
description: Optional[str] = None
"""A description of what the response format is for, used by the model to determine how to
respond in the format.
"""
A description of what the response format is for, used by the model to determine
how to respond in the format.
"""
schema: Optional[Dict[str, object]] = None
"""
The schema for the response format, described as a JSON Schema object. Learn how
to build JSON schemas [here](https://json-schema.org/).
schema: Optional[Dict[str, Any]] = None
"""The schema for the response format, described as a JSON Schema object. Learn how to build
JSON schemas [here](https://json-schema.org/).
"""
strict: Optional[bool] = None
"""
Whether to enable strict schema adherence when generating the output. If set to
true, the model will always follow the exact schema defined in the `schema`
field.
"""Whether to enable strict schema adherence when generating the output. If set to true, the
model will always follow the exact schema defined in the `schema` field.
"""


@dataclass_with_extra
class ChatCompletionInputResponseFormatText(BaseInferenceType):
type: Literal["text"]


@dataclass_with_extra
class ChatCompletionInputResponseFormatJSONSchema(BaseInferenceType):
type: Literal["json_schema"]
json_schema: ChatCompletionInputJSONSchema
ChatCompletionInputGrammarTypeType = Literal["text", "json_schema", "json_object"]


@dataclass_with_extra
class ChatCompletionInputResponseFormatJSONObject(BaseInferenceType):
type: Literal["json_object"]


ChatCompletionInputGrammarType = Union[
ChatCompletionInputResponseFormatText,
ChatCompletionInputResponseFormatJSONSchema,
ChatCompletionInputResponseFormatJSONObject,
]
class ChatCompletionInputGrammarType(BaseInferenceType):
type: "ChatCompletionInputGrammarTypeType"
json_schema: Optional[ChatCompletionInputJSONSchemaConfig] = None


@dataclass_with_extra
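
For reference, a minimal sketch of how the reshaped response-format types from this diff could be constructed. It assumes the classes are importable from the top-level `huggingface_hub` package, as the `[[autodoc]]` entries above suggest; the schema contents are made-up placeholders.

```python
from huggingface_hub import (
    ChatCompletionInputGrammarType,
    ChatCompletionInputJSONSchemaConfig,
)

# `ChatCompletionInputGrammarType` is now a single dataclass with a `type` literal
# ("text", "json_schema" or "json_object") and an optional `json_schema` config,
# replacing the previous Union of three response-format classes.
response_format = ChatCompletionInputGrammarType(
    type="json_schema",
    json_schema=ChatCompletionInputJSONSchemaConfig(
        name="person",  # hypothetical response-format name
        description="A person record extracted from the assistant's reply.",
        schema={
            "type": "object",
            "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
            "required": ["name"],
        },
        strict=True,
    ),
)

# A plain-text response format would simply be:
text_format = ChatCompletionInputGrammarType(type="text")
```
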
60 changes: 60 additions & 0 deletions src/huggingface_hub/inference/_generated/types/image_to_video.py
@@ -0,0 +1,60 @@
# Inference code generated from the JSON schema spec in @huggingface/tasks.
#
# See:
# - script: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-codegen.ts
# - specs: https://github.com/huggingface/huggingface.js/tree/main/packages/tasks/src/tasks.
from typing import Any, Optional

from .base import BaseInferenceType, dataclass_with_extra


@dataclass_with_extra
class ImageToVideoTargetSize(BaseInferenceType):
"""The size in pixel of the output video frames."""

height: int
width: int


@dataclass_with_extra
class ImageToVideoParameters(BaseInferenceType):
"""Additional inference parameters for Image To Video"""

guidance_scale: Optional[float] = None
"""For diffusion models. A higher guidance scale value encourages the model to generate
videos closely linked to the text prompt at the expense of lower image quality.
"""
negative_prompt: Optional[str] = None
"""One prompt to guide what NOT to include in video generation."""
num_frames: Optional[float] = None
"""The num_frames parameter determines how many video frames are generated."""
num_inference_steps: Optional[int] = None
"""The number of denoising steps. More denoising steps usually lead to a higher quality
video at the expense of slower inference.
"""
prompt: Optional[str] = None
"""The text prompt to guide the video generation."""
seed: Optional[int] = None
"""Seed for the random number generator."""
target_size: Optional[ImageToVideoTargetSize] = None
"""The size in pixel of the output video frames."""


@dataclass_with_extra
class ImageToVideoInput(BaseInferenceType):
"""Inputs for Image To Video inference"""

inputs: str
"""The input image data as a base64-encoded string. If no `parameters` are provided, you can
also provide the image data as a raw bytes payload.
"""
parameters: Optional[ImageToVideoParameters] = None
"""Additional inference parameters for Image To Video"""


@dataclass_with_extra
class ImageToVideoOutput(BaseInferenceType):
"""Outputs of inference for the Image To Video task"""

video: Any
"""The generated video returned as raw bytes in the payload."""