
Commit 39ee819

Updated on 2024-11-23
1 parent c139901 commit 39ee819

3 files changed, +22 -3 lines changed


index.html

Lines changed: 1 addition & 1 deletion
@@ -74,7 +74,7 @@ <h1>Where?</h1>
   </p>
   <h1>When?</h1>
   <p>
-    Last time this was edited was 2024-11-22 (YYYY/MM/DD).
+    Last time this was edited was 2024-11-23 (YYYY/MM/DD).
   </p>
   <small><a href="misc.html">misc</a></small>
 </div>

papers/list.json

Lines changed: 9 additions & 0 deletions
@@ -1,4 +1,13 @@
 [
+  {
+    "title": "SphereFed: Hyperspherical Federated Learning",
+    "author": "Xin Dong et al",
+    "year": "2022",
+    "topic": "federated learning",
+    "venue": "Arxiv",
+    "description": "This paper presents a novel approach to addressing the non-i.i.d. (non-independent and identically distributed) data challenge in federated learning by introducing hyperspherical federated learning (SphereFed). The key insight is that instead of letting clients independently learn their classifiers, which leads to inconsistent learning targets across clients, they should share a fixed classifier whose weights span a unit hypersphere, ensuring all clients work toward the same learning objectives. The approach normalizes features to project them onto this same hypersphere and uses mean squared error loss instead of cross-entropy to avoid scaling issues that arise when working with normalized features. Finally, after federated training is complete, they propose a computationally efficient way to calibrate the classifier using a closed-form solution that can be computed in a distributed manner without requiring direct access to private client data.",
+    "link": "https://arxiv.org/pdf/2207.09413"
+  },
   {
     "title": "A deeper look at depth pruning of LLMs",
     "author": "Shoaib Ahmed Siddiqui et al",

papers_read.html

Lines changed: 12 additions & 2 deletions
@@ -75,10 +75,10 @@ <h1>Here's where I keep a list of papers I have read.</h1>
   I typically use this to organize papers I found interesting. Please feel free to do whatever you want with it. Note that this is not every single paper I have ever read, just a collection of ones that I remember to put down.
   </p>
   <p id="paperCount">
-    So far, we have read 173 papers. Let's keep it up!
+    So far, we have read 174 papers. Let's keep it up!
   </p>
   <small id="searchCount">
-    Your search returned 173 papers. Nice!
+    Your search returned 174 papers. Nice!
   </small>

   <div class="search-inputs">
@@ -105,6 +105,16 @@ <h1>Here's where I keep a list of papers I have read.</h1>
   </thead>
   <tbody>

+    <tr>
+      <td>SphereFed: Hyperspherical Federated Learning</td>
+      <td>Xin Dong et al</td>
+      <td>2022</td>
+      <td>federated learning</td>
+      <td>Arxiv</td>
+      <td>This paper presents a novel approach to addressing the non-i.i.d. (non-independent and identically distributed) data challenge in federated learning by introducing hyperspherical federated learning (SphereFed). The key insight is that instead of letting clients independently learn their classifiers, which leads to inconsistent learning targets across clients, they should share a fixed classifier whose weights span a unit hypersphere, ensuring all clients work toward the same learning objectives. The approach normalizes features to project them onto this same hypersphere and uses mean squared error loss instead of cross-entropy to avoid scaling issues that arise when working with normalized features. Finally, after federated training is complete, they propose a computationally efficient way to calibrate the classifier using a closed-form solution that can be computed in a distributed manner without requiring direct access to private client data.</td>
+      <td><a href="https://arxiv.org/pdf/2207.09413" target="_blank">Link</a></td>
+    </tr>
+
   <tr>
     <td>A deeper look at depth pruning of LLMs</td>
     <td>Shoaib Ahmed Siddiqui et al</td>
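The closed-form, distributed calibration mentioned in the description could look roughly like the sketch below: each client sends only aggregated feature statistics (never raw data), and the server solves a regularized least-squares problem for the classifier weights. This is an assumed ridge-regression-style formulation for illustration, not necessarily the paper's exact procedure.

import torch
import torch.nn.functional as F

def client_calibration_stats(features: torch.Tensor, labels: torch.Tensor, num_classes: int):
    """Per-client sufficient statistics; only these aggregates leave the client."""
    targets = F.one_hot(labels, num_classes).float()
    return features.t() @ features, features.t() @ targets  # (d x d), (d x C)

def server_calibrate(stats: list, feature_dim: int, ridge: float = 1e-3) -> torch.Tensor:
    """Server-side closed-form solve for classifier weights from summed client statistics."""
    ftf = sum(s[0] for s in stats) + ridge * torch.eye(feature_dim)
    fty = sum(s[1] for s in stats)
    # W (d x C) such that features @ W approximates the one-hot targets.
    return torch.linalg.solve(ftf, fty)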
