Status: Open
Labels: bug, status: needs triage
Description
Did you check docs and existing issues?
- I have read all the NeMo-Guardrails docs
- I have updated the package to the latest version before submitting this issue
- (optional) I have used the develop branch
- I have searched the existing issues of NeMo-Guardrails
Python version (python --version)
Python 3.13.2
Operating system/version
MacOS 15.6.1
NeMo-Guardrails version (if you must use a specific version and not the latest)
0.16.0
Describe the bug
The state_to_json() function in colang/v2_x/runtime/serialization.py does not correctly serialize dataclasses. The function has no unit-test coverage, and the bug was shadowed by an upstream line that always passed None into state_to_json(); passing None is effectively a no-op for state_to_json().
Once the text = result.text line was removed from nemoguardrails/actions/v2_x/generation.py, generation now raises an error with the stack trace below. This needs to be fixed, but the fix is not required in order to merge the Type fixes; the issue is referenced in the unittest.skip(reason=...) decorator on the affected test. A sketch of one possible direction for the fix follows the stack trace.
tests/v2_x/test_passthroug_mode.py:83 (TestPassthroughLLMActionLogging.test_passthrough_llm_action_invoked_via_logs)
self = <tests.v2_x.test_passthroug_mode.TestPassthroughLLMActionLogging testMethod=test_passthrough_llm_action_invoked_via_logs>
    def test_passthrough_llm_action_invoked_via_logs(self):
        chat = TestChat(
            config,
            llm_completions=["user asked about capabilites", "a random text from llm"],
        )
        rails = chat.app
        logger = logging.getLogger("nemoguardrails.colang.v2_x.runtime.statemachine")
        with self.assertLogs(logger, level="INFO") as log:
            messages = [{"role": "user", "content": "What can you do?"}]
>           response = rails.generate(messages=messages)
v2_x/test_passthroug_mode.py:95:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../nemoguardrails/rails/llm/llmrails.py:1324: in generate
return loop.run_until_complete(
/opt/homebrew/Cellar/python@3.13/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/base_events.py:725: in run_until_complete
return future.result()
../nemoguardrails/rails/llm/llmrails.py:964: in generate_async
output_state = {"state": state_to_json(output_state), "version": "2.x"}
../nemoguardrails/colang/v2_x/runtime/serialization.py:217: in state_to_json
result = json.dumps(d, indent=indent)
/opt/homebrew/Cellar/python@3.13/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/json/__init__.py:238: in dumps
**kw).encode(obj)
/opt/homebrew/Cellar/python@3.13/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/json/encoder.py:200: in encode
chunks = self.iterencode(o, _one_shot=True)
/opt/homebrew/Cellar/python@3.13/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/json/encoder.py:261: in iterencode
return _iterencode(o, 0)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <json.encoder.JSONEncoder object at 0x169872950>
o = ParsedTaskOutput(text='a random text from llm', reasoning_trace=None)
    def default(self, o):
        """Implement this method in a subclass such that it returns
        a serializable object for ``o``, or calls the base implementation
        (to raise a ``TypeError``).

        For example, to support arbitrary iterators, you could
        implement default like this::

            def default(self, o):
                try:
                    iterable = iter(o)
                except TypeError:
                    pass
                else:
                    return list(iterable)
                # Let the base class default method raise the TypeError
                return super().default(o)

        """
>       raise TypeError(f'Object of type {o.__class__.__name__} '
                        f'is not JSON serializable')
E TypeError: Object of type ParsedTaskOutput is not JSON serializable
/opt/homebrew/Cellar/python@3.13/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/json/encoder.py:180: TypeError
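For context, json.dumps() has no built-in handling for dataclass instances such as the ParsedTaskOutput value shown above, so the encoder's default() raises TypeError. Below is a minimal sketch of one possible direction for the fix, not the actual implementation in serialization.py; the ParsedTaskOutput definition is a stand-in reconstructed from the trace and may differ from the real class.

```python
import json
from dataclasses import asdict, dataclass, is_dataclass
from typing import Optional


@dataclass
class ParsedTaskOutput:
    # Stand-in for the class reported in the trace; the real definition lives
    # in the NeMo-Guardrails code base and may differ.
    text: str
    reasoning_trace: Optional[str] = None


class DataclassEncoder(json.JSONEncoder):
    """Falls back to dataclasses.asdict() for dataclass instances."""

    def default(self, o):
        if is_dataclass(o) and not isinstance(o, type):
            return asdict(o)
        return super().default(o)


state = {"output": ParsedTaskOutput(text="a random text from llm")}

# json.dumps(state, indent=2)  # raises the TypeError seen in the trace
print(json.dumps(state, indent=2, cls=DataclassEncoder))  # serializes cleanly
```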
Steps To Reproduce
- Remove the @unittest.skip() decorator from test_passthrough_llm_action_invoked_via_logs() in tests/v2_x/test_passthroug_mode.py (see the sketch below)
- Run poetry run pytest tests/v2_x/test_passthroug_mode.py
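The first step amounts to deleting the skip decorator so pytest collects the test again. A sketch of what that looks like; the reason string is illustrative, not the literal text in the repository:

```python
# tests/v2_x/test_passthroug_mode.py (sketch)
import unittest


class TestPassthroughLLMActionLogging(unittest.TestCase):
    # @unittest.skip(reason="state_to_json() cannot serialize dataclasses; see this issue")
    # ^ delete this decorator so the test runs again
    def test_passthrough_llm_action_invoked_via_logs(self):
        ...
```

The run can also be narrowed to the single test with poetry run pytest tests/v2_x/test_passthroug_mode.py -k test_passthrough_llm_action_invoked_via_logs.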
Expected Behavior
Test Passes
Actual Behavior
Test fails with the stack trace shown above under "Describe the bug".