
using sigllm with CPUs, not CUDA GPUs #35

@algomaschine

Gents, is this possible?

I was advised to do the following:

```python
import torch
torch.set_default_device('cpu')

# Now import other libraries
from mlblocks import MLPipeline

pipeline_name = 'mistral_detector'
pipeline = MLPipeline(pipeline_name)
```

but I got an error on this code (a short sketch of why this fails follows the traceback below):

```python
step = 5
context = pipeline.fit(**context, start_=step, output_=step)
context.keys()
```

```
  0%|          | 0/1508 [00:00<?, ?it/s]
Exception caught producing MLBlock sigllm.primitives.forecasting.huggingface.HF#1
Traceback (most recent call last):
  File "c:\Users\Administrator\anaconda3\envs\py310\lib\site-packages\mlblocks\mlpipeline.py", line 679, in _produce_block
    block_outputs = block.produce(**produce_args)
  File "c:\Users\Administrator\anaconda3\envs\py310\lib\site-packages\mlblocks\mlblock.py", line 331, in produce
    return getattr(self.instance, self.produce_method)(**produce_kwargs)
  File "c:\Users\Administrator\anaconda3\envs\py310\lib\site-packages\sigllm\primitives\forecasting\huggingface.py", line 116, in forecast
    tokenized_input = self.tokenizer([text], return_tensors='pt').to('cuda')
  File "c:\Users\Administrator\anaconda3\envs\py310\lib\site-packages\transformers\tokenization_utils_base.py", line 819, in to
    self.data = {k: v.to(device=device) if isinstance(v, torch.Tensor) else v for k, v in self.data.items()}
  File "c:\Users\Administrator\anaconda3\envs\py310\lib\site-packages\transformers\tokenization_utils_base.py", line 819, in <dictcomp>
    self.data = {k: v.to(device=device) if isinstance(v, torch.Tensor) else v for k, v in self.data.items()}
  File "c:\Users\Administrator\anaconda3\envs\py310\lib\site-packages\torch\utils\_device.py", line 106, in __torch_function__
    return func(*args, **kwargs)
  File "c:\Users\Administrator\anaconda3\envs\py310\lib\site-packages\torch\cuda\__init__.py", line 310, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
```

```
AssertionError                            Traceback (most recent call last)
Cell In[41], line 2
      1 step = 5
----> 2 context = pipeline.fit(**context, start_=step, output_=step)
      3 context.keys()

File c:\Users\Administrator\anaconda3\envs\py310\lib\site-packages\mlblocks\mlpipeline.py:805, in MLPipeline.fit(self, X, y, output_, start_, debug, **kwargs)
    802     self._fit_block(block, block_name, context, debug_info)
    804 if fit_pending or output_blocks:
--> 805     self._produce_block(
    806         block, block_name, context, output_variables, outputs, debug_info)
    808 # We already captured the output from this block
    809 if block_name in output_blocks:

File c:\Users\Administrator\anaconda3\envs\py310\lib\site-packages\mlblocks\mlpipeline.py:679, in MLPipeline._produce_block(self, block, block_name, context, output_variables, outputs, debug_info)
    677 memory_before = process.memory_info().rss
    678 start = datetime.utcnow()
--> 679 block_outputs = block.produce(**produce_args)
    680 elapsed = datetime.utcnow() - start
    681 memory_after = process.memory_info().rss

File c:\Users\Administrator\anaconda3\envs\py310\lib\site-packages\mlblocks\mlblock.py:331, in MLBlock.produce(self, **kwargs)
    329 produce_kwargs = self._get_method_kwargs(produce_kwargs, self.produce_args)
    330 if self._class:
...
    312     raise AssertionError(
    313         "libcudart functions unavailable. It looks like you have a broken build?"
    314     )

AssertionError: Torch not compiled with CUDA enabled
Output is truncated. View as a scrollable element or open in a text editor. Adjust cell output settings...
```
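If I understand correctly, `torch.set_default_device('cpu')` only controls where newly created tensors are placed; it does not rewrite explicit device moves, and the traceback shows that `sigllm/primitives/forecasting/huggingface.py` (line 116) hard-codes `.to('cuda')` on the tokenizer output. A minimal sketch of that behaviour on a CPU-only PyTorch build (variable names here are just for illustration):

```python
import torch

torch.set_default_device('cpu')

x = torch.zeros(3)   # created on CPU, following the default device
x.to('cuda')         # explicit move still targets CUDA; on a CPU-only build
                     # this raises:
                     # AssertionError: Torch not compiled with CUDA enabled
```

So as far as I can tell, running this pipeline on CPU would require the primitive itself to choose the device (e.g. something like `'cuda' if torch.cuda.is_available() else 'cpu'`) instead of the hard-coded `'cuda'`. Is that something sigllm supports, or could support?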

This is on Windows Server with Python 3.10.16.
Thanks in advance!
