Commit b0f80a6

jberkel authored and menshikh-iv committed
Fix spelling (#1625)
1 parent c220166 commit b0f80a6

3 files changed: 5 additions & 5 deletions


docs/notebooks/FastText_Tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -279,7 +279,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-  "The word vector lookup operation only works if atleast one of the component character ngrams is present in the training corpus. For example -"
+  "The word vector lookup operation only works if at least one of the component character ngrams is present in the training corpus. For example -"
  ]
 },
 {
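
The sentence fixed above states gensim's FastText lookup rule: a vector can be produced for a word only when at least one of its component character ngrams appeared in the training corpus. A minimal sketch of that behavior, assuming gensim's FastText class (the toy corpus and query words are invented, and exact parameter names vary across gensim versions):

from gensim.models import FastText

# Tiny invented corpus, just to illustrate the lookup rule.
sentences = [
    ["machine", "learning", "with", "gensim"],
    ["fasttext", "uses", "character", "ngrams"],
]
model = FastText(sentences, min_count=1, min_n=3, max_n=6)

# In-vocabulary lookup always works.
print(model.wv["learning"].shape)

# Out-of-vocabulary lookup also works here, because "learnings" shares
# char ngrams (e.g. "lea", "arn") with words seen during training.
print(model.wv["learnings"].shape)

# A word with no known ngrams raised KeyError in gensim versions
# contemporary with this notebook; newer releases hash ngrams into a
# fixed bucket table and may return a vector regardless.
try:
    model.wv["qqqqqq"]
except KeyError as err:
    print(err)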
@@ -346,7 +346,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-  "Similarity operations work the same way as word2vec. **Out-of-vocabulary words can also be used, provided they have atleast one character ngram present in the training data.**"
+  "Similarity operations work the same way as word2vec. **Out-of-vocabulary words can also be used, provided they have at least one character ngram present in the training data.**"
  ]
 },
 {
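
The bolded claim above is the similarity-side counterpart: queries behave as in word2vec, and out-of-vocabulary words are usable as long as one of their char ngrams was seen in training. A hedged sketch, rebuilding the same invented toy model so the snippet is self-contained:

from gensim.models import FastText

sentences = [
    ["machine", "learning", "with", "gensim"],
    ["fasttext", "uses", "character", "ngrams"],
]
model = FastText(sentences, min_count=1, min_n=3, max_n=6)

# "learnings" is out of vocabulary, but it shares char ngrams with
# "learning", so the similarity query still succeeds.
print(model.wv.similarity("learning", "learnings"))

# Standard word2vec-style queries are unchanged for in-vocabulary words.
print(model.wv.most_similar("fasttext", topn=2))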

docs/notebooks/Word2Vec_FastText_Comparison.ipynb

Lines changed: 2 additions & 2 deletions
@@ -466,7 +466,7 @@
  "Both these subtractions would result in a very similar set of remaining ngrams.\n",
  "No surprise the fastText embeddings do extremely well on this.\n",
  "\n",
-  "Let's do a small test to validate this hypothesis - fastText differs from word2vec only in that it uses char n-gram embeddings as well as the actual word embedding in the scoring function to calculate scores and then likelihoods for each word, given a context word. In case char n-gram embeddings are not present, this reduces (atleast theoretically) to the original word2vec model. This can be implemented by setting 0 for the max length of char n-grams for fastText.\n"
+  "Let's do a small test to validate this hypothesis - fastText differs from word2vec only in that it uses char n-gram embeddings as well as the actual word embedding in the scoring function to calculate scores and then likelihoods for each word, given a context word. In case char n-gram embeddings are not present, this reduces (at least theoretically) to the original word2vec model. This can be implemented by setting 0 for the max length of char n-grams for fastText.\n"
  ]
 },
 {
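
The cell above proposes disabling char n-grams to make fastText collapse, at least theoretically, into word2vec. A rough sketch of that test, assuming gensim's FastText treats a max ngram length of 0 as "no subword information" (the corpus is invented):

from gensim.models import FastText, Word2Vec

sentences = [
    ["the", "quick", "brown", "fox"],
    ["jumps", "over", "the", "lazy", "dog"],
]

# max_n=0 disables character ngrams, so scoring uses full-word vectors
# only, which is exactly the word2vec setup.
ft_plain = FastText(sentences, min_count=1, max_n=0)
w2v = Word2Vec(sentences, min_count=1)

# With no subword information the two models are structurally equivalent;
# individual similarity values still differ due to random initialization.
print(ft_plain.wv.similarity("quick", "lazy"))
print(w2v.wv.similarity("quick", "lazy"))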
@@ -1081,4 +1081,4 @@
  },
  "nbformat": 4,
  "nbformat_minor": 0
- }
+ }

gensim/models/keyedvectors.py

Lines changed: 1 addition & 1 deletion
@@ -633,7 +633,7 @@ def n_similarity(self, ws1, ws2):
 
         """
         if not(len(ws1) and len(ws2)):
-            raise ZeroDivisionError('Atleast one of the passed list is empty.')
+            raise ZeroDivisionError('At least one of the passed list is empty.')
         v1 = [self[word] for word in ws1]
         v2 = [self[word] for word in ws2]
         return dot(matutils.unitvec(array(v1).mean(axis=0)), matutils.unitvec(array(v2).mean(axis=0)))
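
For context, n_similarity computes the cosine similarity between the mean vectors of two word lists, and the guard shown in this diff rejects empty lists. A usage sketch against a small trained model (corpus and words are invented):

from gensim.models import Word2Vec

sentences = [
    ["sushi", "shop", "in", "tokyo"],
    ["japanese", "restaurant", "in", "tokyo"],
]
model = Word2Vec(sentences, min_count=1)

# Cosine similarity between the averaged vectors of the two lists.
print(model.wv.n_similarity(["sushi", "shop"], ["japanese", "restaurant"]))

# An empty list trips the guard corrected above.
try:
    model.wv.n_similarity([], ["japanese", "restaurant"])
except ZeroDivisionError as err:
    print(err)  # At least one of the passed list is empty.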
