A repository comparing the inference accuracy of MindNLP and Transformers on the GLUE QNLI dataset.
## Dataset

The QNLI (Question Natural Language Inference) dataset is part of the GLUE benchmark. It is converted from the Stanford Question Answering Dataset (SQuAD).

Place the following files in the `mindnlp/benchmark/GLUE-QNLI/` directory:

- dev.tsv (Development set)
- test.tsv (Test set)
- train.tsv (Training set)

The QNLI task is a binary classification task derived from SQuAD, where the goal is to determine whether a given context sentence contains the answer to a given question.
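As an illustration of what these files contain, here is a minimal sketch of parsing a QNLI row, assuming the standard GLUE column layout (`index`, `question`, `sentence`, `label`) — verify this against your local copies of the `.tsv` files:

```python
import csv
import io

# Hypothetical in-memory sample mimicking a GLUE QNLI .tsv file.
# Real files are tab-separated with a header row; QUOTE_NONE matters
# because GLUE TSVs are not quoted.
sample = (
    "index\tquestion\tsentence\tlabel\n"
    "0\tWhat is MindNLP?\tMindNLP is an NLP library.\tentailment\n"
)

reader = csv.DictReader(io.StringIO(sample), delimiter="\t", quoting=csv.QUOTE_NONE)
rows = list(reader)
print(rows[0]["question"])  # -> What is MindNLP?
print(rows[0]["label"])     # -> entailment
```

Each row pairs a question with a candidate context sentence, and the label says whether the sentence answers the question (`entailment`) or not (`not_entailment`).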
## Quick Start
### Installation
To get started with this project, follow these steps:
1. **Create a conda environment (optional but recommended):**

```bash
conda create -n mindnlp python==3.9
conda activate mindnlp
```

2. **Install the dependencies:**

   Note that MindNLP runs in the Ascend environment while Transformers runs in the GPU environment; the dependencies for each are listed in the `requirements.txt` of their respective folders.

```bash
pip install -r requirements.txt
```

3. **Usage**

   Once the installation is complete, you can choose different models to run inference. Here's how to run the inference:

```bash
# Evaluate a specific model using the default dataset (dev.tsv)
```
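Since the repository compares inference accuracy between the two frameworks, here is a minimal sketch of how QNLI accuracy could be computed from a model's predictions; the `accuracy` helper below is hypothetical and not the repository's actual evaluation code:

```python
# Sketch: compute accuracy for QNLI, a binary classification task with
# labels "entailment" / "not_entailment".

def accuracy(predictions, references):
    """Fraction of predictions that exactly match the gold labels."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must be the same length")
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

preds = ["entailment", "not_entailment", "entailment", "entailment"]
golds = ["entailment", "not_entailment", "not_entailment", "entailment"]
print(f"accuracy: {accuracy(preds, golds):.2f}")  # 3 of 4 correct -> 0.75
```

Accuracy is the official GLUE metric for QNLI, so comparing the MindNLP and Transformers runs reduces to comparing this number on the same split.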