Replies: 1 comment 1 reply
-
I use a Mac with Apple silicon, so sadly I can't provide any guidance here. Hopefully someone else can!
-
I want to use the param `'device': 'cuda'` for faster training. Can someone check my edited code?
```python
import sqlite3
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from tqdm import tqdm

dataset = "dataset_2012-24"
con = sqlite3.connect("../../Data/dataset.sqlite")
# The table name contains a dash, so it must stay double-quoted inside the SQL;
# using single quotes for the Python f-string avoids clashing quote marks.
data = pd.read_sql_query(f'select * from "{dataset}"', con, index_col="index")
con.close()

margin = data['Home-Team-Win']
data.drop(['Score', 'Home-Team-Win', 'TEAM_NAME', 'Date', 'TEAM_NAME.1', 'Date.1', 'OU-Cover', 'OU'],
          axis=1, inplace=True)
data = data.values
data = data.astype(float)

highest_acc = 0
best_model = None
for x in tqdm(range(300)):
    x_train, x_test, y_train, y_test = train_test_split(data, margin, test_size=.1)
    # training / accuracy evaluation not shown in this snippet; see the sketch below
    if best_model is not None:
        best_model.save_model(f'../../Models/XGBoost_Models/XGBoost_{highest_acc}%_ML-4.json')
```
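For reference, here is a minimal sketch of what the loop body might look like with GPU training enabled. The booster parameters (`max_depth`, `eta`, `num_boost_round`, the `multi:softprob` objective) are assumptions for illustration, not necessarily what the full script uses; the part that actually moves training to the GPU in XGBoost 2.x is `'device': 'cuda'` combined with `'tree_method': 'hist'` (older 1.x releases used `tree_method='gpu_hist'` instead):

```python
import numpy as np
import xgboost as xgb
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from tqdm import tqdm

# `data` and `margin` come from the preprocessing above.
highest_acc = 0
best_model = None
for x in tqdm(range(300)):
    x_train, x_test, y_train, y_test = train_test_split(data, margin, test_size=.1)
    train = xgb.DMatrix(x_train, label=y_train)
    test = xgb.DMatrix(x_test, label=y_test)
    param = {
        'max_depth': 3,                # assumed value, not from the original post
        'eta': 0.01,                   # assumed learning rate
        'objective': 'multi:softprob',
        'num_class': 2,
        'device': 'cuda',              # run training on the GPU (XGBoost >= 2.0)
        'tree_method': 'hist',         # histogram method used for GPU training
    }
    model = xgb.train(param, train, num_boost_round=750)
    predictions = model.predict(test)        # (n_samples, 2) class probabilities
    y_pred = np.argmax(predictions, axis=1)  # take the more likely class
    acc = round(accuracy_score(y_test, y_pred) * 100, 1)
    if acc > highest_acc:
        highest_acc = acc
        best_model = model
        best_model.save_model(f'../../Models/XGBoost_Models/XGBoost_{acc}%_ML-4.json')
```

If training still ends up on the CPU with these parameters, the usual cause is an XGBoost wheel built without CUDA rather than the parameters themselves.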
My TensorFlow does detect the GPU.
My coding understanding is very limited, so I hope someone can check this. The code does run; I just wonder whether the model results are good.
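Note that TensorFlow detecting the GPU only confirms TensorFlow's own build. Assuming a reasonably recent XGBoost (1.6 or later), `xgb.build_info()` reports whether the installed XGBoost wheel itself was compiled with CUDA:

```python
import xgboost as xgb

# Check whether the installed XGBoost wheel has CUDA support;
# TensorFlow seeing the GPU says nothing about XGBoost's build.
print(xgb.__version__)
print(xgb.build_info().get("USE_CUDA"))  # True -> GPU training is available
```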