prepare for python 3.12 #190

Merged · 1 commit · Dec 4, 2024
4 changes: 2 additions & 2 deletions doc/source/install.rst
@@ -7,7 +7,7 @@ Installation
The SWAT package is installed using the ``pip`` command. The requirements
for using the binary protocol of CAS (recommended) are as follows.

-* **64-bit** Python 3.7 - 3.11 on Linux or Windows
+* **64-bit** Python 3.7 - 3.12 on Linux or Windows

See additional shared library notes below.

@@ -19,7 +19,7 @@ amounts of data. It also offers more advanced data loading from the client
and data formatting features.

To access the CAS REST interface only, you can use the pure Python code which
-runs in Python 2.7/3.5+. You will still need Pandas installed. While not as
+runs in Python 3.7 - 3.12. You will still need Pandas installed. While not as
fast as the binary protocol, the pure Python interface is more portable.
For more information, see :ref:`Binary vs. REST <binaryvsrest>`.

3 changes: 2 additions & 1 deletion setup.py
@@ -43,7 +43,7 @@ def get_file(fname):
license='Apache v2.0 (SWAT) + SAS Additional Functionality (SAS TK)',
packages=find_packages(),
package_data={
-        'swat': ['lib/*/*.*', 'tests/datasources/*.*'],
+        'swat': ['lib/*/*.*', 'tests/datasources/*.*', 'readme.md'],
},
install_requires=[
'pandas >= 0.16.0',
@@ -68,6 +68,7 @@ def get_file(fname):
'Programming Language :: Python :: 3.9',
'Programming Language :: Python :: 3.10',
'Programming Language :: Python :: 3.11',
+        'Programming Language :: Python :: 3.12',
'Topic :: Scientific/Engineering',
],
)
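Note that the new `3.12` trove classifier above is informational metadata only; classifiers do not restrict which interpreters can install the package (that would take a `python_requires` bound, which this diff does not touch). As a sketch of the support window the classifiers now describe, a hypothetical version guard (not part of SWAT) could look like:

```python
import sys

# Hypothetical helper mirroring the documented support window of
# Python 3.7 - 3.12. In setup.py this same window could be enforced
# with: python_requires=">=3.7,<3.13"
MIN_VERSION = (3, 7)
MAX_VERSION_EXCLUSIVE = (3, 13)

def is_supported(version_info=sys.version_info):
    """Return True if the interpreter version falls inside the window."""
    major_minor = (version_info[0], version_info[1])
    return MIN_VERSION <= major_minor < MAX_VERSION_EXCLUSIVE

print(is_supported((3, 12, 0)))   # 3.12 is now in range
print(is_supported((2, 7, 18)))   # Python 2 is not
```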
13 changes: 13 additions & 0 deletions swat/readme.md
@@ -0,0 +1,13 @@
For **Python 3.12 on Windows only**, the following modification was made to the `pyport.h` file while building the SWAT C extensions:

* Updated the `#define` for `ALWAYS_INLINE`
<br>**Previous Definition:**
```c
#elif defined(__GNUC__) || defined(__clang__) || defined(__INTEL_COMPILER)
```
**Updated Definition:**
```c
#elif defined(__GNUC__) || defined(__clang__) || defined(__INTEL_LLVM_COMPILER) || (defined(__INTEL_COMPILER) && !defined(_WIN32))
```

This change addresses a compiler error encountered when using the Intel compiler on Windows.
22 changes: 11 additions & 11 deletions swat/tests/cas/test_table.py
@@ -878,10 +878,10 @@ def test_drop_duplicates(self):
df_dropped = df.drop_duplicates(subset='Make')

# Equivalent to pandas in size
-        self.assertEquals(len(tbl_dropped), len(df_dropped))
+        self.assertEqual(len(tbl_dropped), len(df_dropped))
# Number of elements in 'Make' column should be same as number of unique elements
-        self.assertEquals(tbl_dropped['Make'].nunique(), len(tbl_dropped['Make']))
-        self.assertEquals(tbl_dropped['Make'].nunique(), len(tbl_dropped))
+        self.assertEqual(tbl_dropped['Make'].nunique(), len(tbl_dropped['Make']))
+        self.assertEqual(tbl_dropped['Make'].nunique(), len(tbl_dropped))

# drop duplicates for multi-element subset
tbl_dropped_multi = tbl.drop_duplicates(casout={'replace': True,
@@ -890,7 +890,7 @@
df_dropped_multi = df.drop_duplicates(subset=['Origin', 'Type'])

# Equivalent to pandas in size
-        self.assertEquals(len(tbl_dropped_multi), len(df_dropped_multi))
+        self.assertEqual(len(tbl_dropped_multi), len(df_dropped_multi))

# We need some rows where all values for each col are duplicate
nDuplicates = 7
@@ -915,8 +915,8 @@
'name': 'drop-test-4'})

# Make sure that the correct amount of rows were dropped
-        self.assertEquals(len(tbl), len(tbl_dropped_all))
-        self.assertEquals(len(duplicate_table), len(tbl_dropped_all) + nDuplicates)
+        self.assertEqual(len(tbl), len(tbl_dropped_all))
+        self.assertEqual(len(duplicate_table), len(tbl_dropped_all) + nDuplicates)

def test_column_iter(self):
df = self.get_cars_df()
@@ -3314,23 +3314,23 @@ def test_nunique(self):
tbl_nunique = tbl.nunique()
df_nunique = df.nunique()
# Length of Series are equal
-        self.assertEquals(len(tbl_nunique), len(df_nunique))
+        self.assertEqual(len(tbl_nunique), len(df_nunique))
# Indices are equal
self.assertTrue(sorted(tbl_nunique) == sorted(df_nunique))
# Values are equal
for col in tbl.columns:
-            self.assertEquals(tbl_nunique[col], df_nunique[col])
+            self.assertEqual(tbl_nunique[col], df_nunique[col])

# Now counting NaN
tbl_nunique_nan = tbl.nunique(dropna=False)
df_nunique_nan = df.nunique(dropna=False)
# Length of Series are equal
-        self.assertEquals(len(tbl_nunique_nan), len(df_nunique_nan))
+        self.assertEqual(len(tbl_nunique_nan), len(df_nunique_nan))
# Indices are equal
-        self.assertEquals(sorted(tbl_nunique_nan), sorted(df_nunique_nan))
+        self.assertEqual(sorted(tbl_nunique_nan), sorted(df_nunique_nan))
# Values are equal
for col in tbl.columns:
-            self.assertEquals(tbl_nunique_nan[col], df_nunique_nan[col])
+            self.assertEqual(tbl_nunique_nan[col], df_nunique_nan[col])

def test_column_unique(self):
df = self.get_cars_df()
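The `assertEquals` → `assertEqual` changes above are what actually unblock the test suite on Python 3.12: `assertEquals` is an alias that was deprecated since Python 3.2 and removed in Python 3.12, so calling it on 3.12 raises `AttributeError`. A minimal self-contained illustration of the surviving spelling (a stand-in demo, not SWAT's own test code):

```python
import unittest

class AliasDemoTest(unittest.TestCase):
    """Tiny stand-in for the SWAT test updates above."""

    def test_equal(self):
        # assertEqual works on every supported version; the assertEquals
        # alias was removed in Python 3.12 along with the other
        # long-deprecated unittest aliases.
        self.assertEqual(len({'a', 'b', 'a'}), 2)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(AliasDemoTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())   # → True
```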