Comparing CNNs with fixed vs. learnable (KAN-based) activations for image classification across multiple datasets.

DavianYang/kan-cnn-study

Description

This project investigates how Kolmogorov–Arnold Networks (KANs) compare to traditional convolutional neural networks (CNNs) with fixed activation functions in the context of image classification across multiple datasets.

Unlike standard CNNs, which rely on predefined activations such as ReLU or sigmoid, KANs use learnable activation functions, potentially making networks more adaptable, generalizable, and efficient even with fewer layers.
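
To make the contrast concrete, here is a minimal PyTorch sketch of a per-channel learnable activation built from a SiLU base term plus a learned mixture of Gaussian basis functions. It is only a simplified stand-in for the spline-parameterized activations used in KANs; the module name, basis choice, and hyperparameters are assumptions for illustration, not this repository's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableActivation(nn.Module):
    """Per-channel learnable activation: a SiLU base term plus a learned
    mixture of Gaussian bases (a simplified stand-in for KAN splines)."""

    def __init__(self, num_channels: int, num_basis: int = 8, grid_range: float = 2.0):
        super().__init__()
        # Fixed basis centres spread over [-grid_range, grid_range].
        self.register_buffer("centres", torch.linspace(-grid_range, grid_range, num_basis))
        # Learnable mixing coefficients, one set per channel.
        self.coeffs = nn.Parameter(torch.zeros(num_channels, num_basis))
        self.base_weight = nn.Parameter(torch.ones(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W); evaluate every basis function at every activation value.
        basis = torch.exp(-((x.unsqueeze(-1) - self.centres) ** 2))   # (N, C, H, W, K)
        spline = torch.einsum("nchwk,ck->nchw", basis, self.coeffs)   # learned component
        return self.base_weight.view(1, -1, 1, 1) * F.silu(x) + spline
```

With the coefficients initialized to zero, the module starts out behaving like plain SiLU and learns its shape during training.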

The research involves training and evaluating various CNN architectures (e.g., ResNet, LeNet, AlexNet, GoogLeNet) using both traditional and KAN-based activations. Key metrics include accuracy, training efficiency, and robustness to data variations and augmentation techniques.
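
As a rough illustration of how such a comparison might be wired up, the sketch below builds a LeNet-style network whose convolutional activations can be switched between plain ReLU and the LearnableActivation module from the previous snippet. The make_lenet helper and the layer sizes are illustrative assumptions, not the architectures actually trained here.

```python
import torch.nn as nn

def make_lenet(activation: str = "relu", num_classes: int = 10) -> nn.Sequential:
    """LeNet-style CNN with a swappable convolutional activation
    (illustrative; not necessarily this repository's wiring)."""
    def act(channels: int) -> nn.Module:
        # "kan" plugs in the LearnableActivation sketched earlier; otherwise ReLU.
        return LearnableActivation(channels) if activation == "kan" else nn.ReLU()

    return nn.Sequential(
        nn.Conv2d(3, 6, kernel_size=5), act(6), nn.MaxPool2d(2),
        nn.Conv2d(6, 16, kernel_size=5), act(16), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.LazyLinear(120), nn.ReLU(),   # fully connected layers kept fixed for simplicity
        nn.Linear(120, 84), nn.ReLU(),
        nn.Linear(84, num_classes),
    )

# Example: instantiate both variants and train them with the same loop and metrics.
baseline = make_lenet("relu")
kan_variant = make_lenet("kan")
```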

The ultimate goal is to assess whether KAN-enhanced CNNs offer a superior alternative to the conventional models that have long dominated image classification.
