
Commit 57e2b24

made changes to competition page

1 parent f3d0359 commit 57e2b24

File tree

1 file changed (+22, -22 lines)

competition.html

Lines changed: 22 additions & 22 deletions
@@ -59,41 +59,41 @@ <h2 class="text-2xl font-semibold mb-2 text-black">Motivation</h2>
 <!-- task definition -->
 <section id="task-definition" class="mb-6">
 <h2 class="text-2xl font-semibold mb-2 text-black">Task Definition</h2>
-<p class="mb-4 text-black">In this competition, participants will design an algorithm to efficiently train CLIP models \cite{pmlr-v139-radford21a} in a limited-resource setting. The designed algorithm needs to be implemented and will be run on training datasets of different sizes on a small number of GPUs (e.g., 8GPUs). The algorithms will then be ranked according to the evaluation performance of the trained models. In order to reduce the cost of participation, the participants only need to design and implement their algorithms, and we will provide resources to run their algorithms.</p>
+<p class="mb-4 text-black">In this competition, participants will design an algorithm to efficiently train <a href="https://arxiv.org/abs/2103.00020" class="text-blue-600 underline hover:text-blue-800">CLIP models</a> in a limited-resource setting. The designed algorithm needs to be implemented and will be run on training datasets of different sizes on a small number of GPUs (e.g., 8GPUs). The algorithms will then be ranked according to the evaluation performance of the trained models. In order to reduce the cost of participation, the participants only need to design and implement their algorithms, and we will provide resources to run their algorithms.</p>
 </section>
 
 <!-- training setting -->
 <section id="training-setting" class="mb-6">
 <h2 class="text-2xl font-semibold mb-2 text-black">Training Setting</h2>
-<p class="mb-4 text-black">Each submission will be run in three different settings, which differ in the size of the training data. Based on the training data size, we name the three settings as small (1 million training data), medium (10 million) and large (100 million). Dataset of smaller scale is a subset of that of larger scales, which are all subsets of the DFN-2B dataset \cite{fang2024data}. Other components of training, including number of samples seen, batch size, etc., will be fixed across different settings. We provide more detail of different training settings in the table below.</p>
+<p class="mb-4 text-black">Each submission will be run in three different settings, which differ in the size of the training data. Based on the training data size, we name the three settings as small (1 million training data), medium (10 million) and large (100 million). Dataset of smaller scale is a subset of that of larger scales, which are all subsets of the <a href="https://arxiv.org/abs/2309.17425" class="text-blue-600 underline hover:text-blue-800">DFN-2B dataset</a>. Other components of training, including number of samples seen, batch size, etc., will be fixed across different settings. We provide more detail of different training settings in the table below.</p>
 
-<table class="min-w-full bg-white mb-4 border text-black">
-<thead>
+<table class="min-w-full bg-white mb-4 border text-black text-center border-collapse">
+<thead class="border-b-2 border-gray-500">
 <tr>
-<th class="py-2 px-4 border-b">Scale</th>
-<th class="py-2 px-4 border-b">Dataset Size</th>
-<th class="py-2 px-4 border-b">Samples Seen</th>
-<th class="py-2 px-4 border-b">Model</th>
-<th class="py-2 px-4 border-b">Batch Size/GPU</th>
-<th class="py-2 px-4 border-b">GPUs</th>
+<th class="py-2 px-4 border border-gray-500">Scale</th>
+<th class="py-2 px-4 border border-gray-500">Dataset Size</th>
+<th class="py-2 px-4 border border-gray-500">Samples Seen</th>
+<th class="py-2 px-4 border border-gray-500">Model</th>
+<th class="py-2 px-4 border border-gray-500">Batch Size/GPU</th>
+<th class="py-2 px-4 border border-gray-500">GPUs</th>
 </tr>
 </thead>
-<tbody>
+<tbody class="divide-y divide-gray-500">
 <tr>
-<td class="py-2 px-4 border-b">Small</td>
-<td class="py-2 px-4 border-b">1 million</td>
-<td class="py-2 px-4 border-b" rowspan="3">1 billion</td>
-<td class="py-2 px-4 border-b" rowspan="3">ViT-B/32</td>
-<td class="py-2 px-4 border-b" rowspan="3">4096</td>
-<td class="py-2 px-4 border-b" rowspan="3">8x H100</td>
+<td class="py-2 px-4 border border-gray-500">Small</td>
+<td class="py-2 px-4 border border-gray-500">1 million</td>
+<td class="py-2 px-4 border border-gray-500" rowspan="3">1 billion</td>
+<td class="py-2 px-4 border border-gray-500" rowspan="3">ViT-B/32</td>
+<td class="py-2 px-4 border border-gray-500" rowspan="3">4096</td>
+<td class="py-2 px-4 border border-gray-500" rowspan="3">8x H100</td>
 </tr>
 <tr>
-<td class="py-2 px-4 border-b">Medium</td>
-<td class="py-2 px-4 border-b">10 million</td>
+<td class="py-2 px-4 border border-gray-500">Medium</td>
+<td class="py-2 px-4 border border-gray-500">10 million</td>
 </tr>
 <tr>
-<td class="py-2 px-4 border-b">Large</td>
-<td class="py-2 px-4 border-b">100 million</td>
+<td class="py-2 px-4 border border-gray-500">Large</td>
+<td class="py-2 px-4 border border-gray-500">100 million</td>
 </tr>
 </tbody>
 </table>
@@ -102,7 +102,7 @@ <h2 class="text-2xl font-semibold mb-2 text-black">Training Setting</h2>
 <!-- evaluation -->
 <section id="evaluation" class="mb-6">
 <h2 class="text-2xl font-semibold mb-2 text-black">Evaluation Metric</h2>
-<p class="mb-4 text-black">The performance of the trained models will be evaluated using the <a href="https://github.com/mlfoundations/datacomp" class="text-blue-500 hover:underline">DataComp</a> benchmark. We also keep track of ImageNet-1K Top 1 zero-shot accuracy. We will evaluate submissions as soon as we can and release leaderboard updates on a weekly basis.</p>
+<p class="mb-4 text-black">The performance of the trained models will be evaluated using the <a href="https://github.com/mlfoundations/datacomp" class="text-blue-600 underline hover:text-blue-800">DataComp</a> benchmark. We also keep track of ImageNet-1K Top 1 zero-shot accuracy. We will evaluate submissions as soon as we can and release leaderboard updates on a weekly basis.</p>
 </section>
 
 <!-- baseline and resources -->
