
Commit 86e0618

Merge branch 'release-3.0.1'

2 parents: 351bdef + 1c26225


46 files changed (+394, -255 lines)

CHANGELOG.md

Lines changed: 33 additions & 0 deletions

@@ -1,5 +1,38 @@
 Changes
 ===========
+## 3.0.1, 2017-10-12
+
+
+:red_circle: Bug fixes:
+* Fix Keras import, speed up import time. Fix #1614 (@menshikh-v, [#1615](https://github.com/RaRe-Technologies/gensim/pull/1615))
+* Fix Sphinx warnings and retrieve all missing .rst files (@anotherbugmaster and @menshikh-iv, [#1612](https://github.com/RaRe-Technologies/gensim/pull/1612))
+* Fix logger message in lsi_dispatcher (@lorosanu, [#1603](https://github.com/RaRe-Technologies/gensim/pull/1603))
+
+
+:books: Tutorial and doc improvements:
+* Fix spelling (@jberkel, [#1625](https://github.com/RaRe-Technologies/gensim/pull/1625))
+
+:warning: Deprecations (will come into force in the next release)
+* Remove
+    - `gensim.examples`
+    - `gensim.nosy`
+    - `gensim.scripts.word2vec_standalone`
+    - `gensim.scripts.make_wiki_lemma`
+    - `gensim.scripts.make_wiki_online`
+    - `gensim.scripts.make_wiki_online_lemma`
+    - `gensim.scripts.make_wiki_online_nodebug`
+    - `gensim.scripts.make_wiki`
+
+* Move
+    - `gensim.scripts.make_wikicorpus` ➡ `gensim.scripts.make_wiki.py`
+    - `gensim.summarization` ➡ `gensim.models.summarization`
+    - `gensim.topic_coherence` ➡ `gensim.models._coherence`
+    - `gensim.utils` ➡ `gensim.utils.utils` (old imports will continue to work)
+    - `gensim.parsing.*` ➡ `gensim.utils.text_utils`
+
+Also, we'll create an `experimental` subpackage for unstable models. Specific lists will be available in the next release.
+
+
 ## 3.0.0, 2017-09-27
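A side note on the `* Move` entries above: the promise that "old imports will continue to work" is usually kept with an import-time alias that registers the moved module under its old name. A minimal sketch of that pattern, with a hypothetical helper name and example paths (this is not gensim's actual shim):

```python
# Hypothetical backward-compatibility alias for a moved module.
# The helper and the commented example paths are illustrative only;
# this is not gensim's actual deprecation shim.
import importlib
import sys

def alias_module(old_name, new_name):
    """Keep `import old_name` working after the code moves to new_name."""
    module = importlib.import_module(new_name)
    sys.modules[old_name] = module
    return module

# e.g. alias_module("gensim.topic_coherence", "gensim.models._coherence")
```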

continuous_integration/travis/flake8_diff.sh

Lines changed: 3 additions & 0 deletions

@@ -153,3 +153,6 @@ else
         jupyter nbconvert --to script --stdout ${fname} | flake8 - --show-source --ignore=E501,E731,E12,W503,E402 --builtins=get_ipython || true
     done
 fi
+
+echo "Build documentation"
+pip install .[docs] && cd docs/src && make clean html

docs/notebooks/FastText_Tutorial.ipynb

Lines changed: 2 additions & 2 deletions

@@ -279,7 +279,7 @@
     "cell_type": "markdown",
     "metadata": {},
     "source": [
-     "The word vector lookup operation only works if atleast one of the component character ngrams is present in the training corpus. For example -"
+     "The word vector lookup operation only works if at least one of the component character ngrams is present in the training corpus. For example -"
     ]
    },
    {
@@ -346,7 +346,7 @@
     "cell_type": "markdown",
     "metadata": {},
     "source": [
-     "Similarity operations work the same way as word2vec. **Out-of-vocabulary words can also be used, provided they have atleast one character ngram present in the training data.**"
+     "Similarity operations work the same way as word2vec. **Out-of-vocabulary words can also be used, provided they have at least one character ngram present in the training data.**"
     ]
    },
    {
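To make the corrected sentence concrete, here is a minimal sketch of FastText's out-of-vocabulary lookup. The corpus and hyperparameters below are made up, and the keyword names follow the current gensim 4.x API rather than the one this 2017 notebook used:

```python
# Illustrative FastText OOV lookup; corpus and settings are made up,
# keyword names follow gensim 4.x (the 2017 API differed).
from gensim.models import FastText

sentences = [["night", "nights", "knight"], ["day", "days", "daily"]]
model = FastText(sentences, vector_size=10, min_count=1, min_n=3, max_n=6)

# "nightly" never occurs in the corpus, but it shares character ngrams
# such as "nig", "igh" and "ght" with words that do, so FastText can
# compose a vector for it from those ngram embeddings.
vector = model.wv["nightly"]
print(vector.shape)  # (10,)
```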

docs/notebooks/Word2Vec_FastText_Comparison.ipynb

Lines changed: 2 additions & 2 deletions

@@ -466,7 +466,7 @@
     "Both these subtractions would result in a very similar set of remaining ngrams.\n",
     "No surprise the fastText embeddings do extremely well on this.\n",
     "\n",
-     "Let's do a small test to validate this hypothesis - fastText differs from word2vec only in that it uses char n-gram embeddings as well as the actual word embedding in the scoring function to calculate scores and then likelihoods for each word, given a context word. In case char n-gram embeddings are not present, this reduces (atleast theoretically) to the original word2vec model. This can be implemented by setting 0 for the max length of char n-grams for fastText.\n"
+     "Let's do a small test to validate this hypothesis - fastText differs from word2vec only in that it uses char n-gram embeddings as well as the actual word embedding in the scoring function to calculate scores and then likelihoods for each word, given a context word. In case char n-gram embeddings are not present, this reduces (at least theoretically) to the original word2vec model. This can be implemented by setting 0 for the max length of char n-grams for fastText.\n"
     ]
    },
    {
@@ -1081,4 +1081,4 @@
  },
  "nbformat": 4,
  "nbformat_minor": 0
-}
+}
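The hypothesis in the quoted cell, that fastText without char n-gram embeddings reduces (at least theoretically) to word2vec, corresponds to disabling character ngrams via the maximum n-gram length. A rough sketch under the same caveats as above (made-up corpus, gensim 4.x keyword names rather than the notebook's 2017 API):

```python
# Rough sketch: with max_n smaller than min_n (here max_n=0), gensim's
# FastText collects no character ngrams, so each word is scored from its
# word embedding alone, as in plain word2vec. Corpus and settings are
# made up; keyword names follow gensim 4.x, not the notebook's 2017 API.
from gensim.models import FastText, Word2Vec

sentences = [["human", "interface", "computer"], ["survey", "user", "time"]]

ft_plain = FastText(sentences, vector_size=10, min_count=1, max_n=0)
w2v = Word2Vec(sentences, vector_size=10, min_count=1)
```

The two models still won't produce identical vectors in practice, since initialisation and training randomness differ, but the scoring function coincides.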
