Why do I see serialVersionUID or MLReadable or MLWritable errors #2562
Hi,
Question
Why do I see an exception/error that has `serialVersionUID` inside the trace?
Problem:
You are using a model in the wrong Apache Spark version. There are 6 annotators which you can train and use either in Apache Spark 2.3.x/2.4.x or in Apache Spark 3.x, but you cannot train in one and use it in the other. (This is due to the change from Scala 2.11 in Spark 2.3.x/2.4.x to Scala 2.12 in Spark 3.x.)
Solution:
- If you downloaded the model via `.pretrained("NAME_OF_THE_MODEL")` and this happens, please report it. We probably forgot to update that model/pipeline for the newer Spark 3.x.
- Download models and pipelines via `.pretrained()`, which takes care of everything for you, or directly from Models Hub.
NOTE: Every other model or pipeline provided by Spark NLP, or previously saved on your disk by you, is compatible with Apache Spark/PySpark 3.x over Scala 2.12; only the 6 annotators mentioned above cannot be used in both Scala 2.11 and 2.12 at the same time.
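A minimal sketch of the `.pretrained()` route, assuming a Spark 3.x session started through Spark NLP (the model name `glove_100d` is only an example; any model name from Models Hub works the same way):

```python
import sparknlp
from sparknlp.annotator import WordEmbeddingsModel

# Start a Spark 3.x (Scala 2.12) session through Spark NLP.
spark = sparknlp.start()

# .pretrained() resolves and downloads the model for you, so it takes care
# of fetching a copy that works with the running Spark/Scala version.
embeddings = (
    WordEmbeddingsModel.pretrained("glove_100d", lang="en")
    .setInputCols(["document", "token"])
    .setOutputCol("embeddings")
)
```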
Question
Why do I see an exception/error that has `MLReadable` or `MLWritable` inside the trace?
Problem:
Spark NLP provides multiple artifacts for each major Apache Spark version. For instance, if you have installed or are using Apache Spark 2.3.x, you must use the Maven package/JAR built for that specific version. This error means the Spark NLP artifact from Maven, or your Fat JAR, does not match your Apache Spark major version.
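For context, this is a minimal sketch of what "matching" means if you build the session yourself instead of using `sparknlp.start()`. The Maven coordinate below is only an illustration of the Spark 3.x (Scala 2.12) artifact naming; check the Spark NLP installation docs for the exact coordinate and version for your Spark major version.

```python
from pyspark.sql import SparkSession

# Manual session creation: the spark.jars.packages coordinate must match the
# Apache Spark major version you run. The coordinate below is illustrative
# (Scala 2.12 build for Spark 3.x); Spark 2.3.x/2.4.x need the Scala 2.11 builds.
spark = (
    SparkSession.builder
    .appName("spark-nlp")
    .config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.12:3.0.3")
    .getOrCreate()
)
```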
Solution:
- If you are on `pyspark==3.1.x` or `pyspark==3.0.x`, simply use `spark = sparknlp.start()` without any parameter.
- If you are on `pyspark==2.4.x`, use `spark = sparknlp.start(spark24=True)` to use the right package.
- If you are on `pyspark==2.3.x`, use `spark = sparknlp.start(spark23=True)` to use the right package (see the sketch below).
One tip is to be sure you don't have any leftovers in your `SPARK_HOME`, especially on Windows: `pip install pyspark==3.1.1` should work, but it won't if you forgot to point your `SPARK_HOME` to Apache Spark 3.1.1.
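Putting the solution together, a minimal sketch, with the 2.4.x/2.3.x variants left commented out and a couple of optional sanity checks at the end:

```python
import os
import sparknlp

# pyspark==3.0.x / 3.1.x: no flag needed, the Scala 2.12 package is used.
spark = sparknlp.start()

# pyspark==2.4.x: use the Spark 2.4 (Scala 2.11) package instead.
# spark = sparknlp.start(spark24=True)

# pyspark==2.3.x: use the Spark 2.3 (Scala 2.11) package instead.
# spark = sparknlp.start(spark23=True)

# Optional sanity checks: a mismatched Spark version or a stale SPARK_HOME
# (especially on Windows) is the usual cause of MLReadable/MLWritable errors.
print("Apache Spark:", spark.version)
print("Spark NLP:", sparknlp.version())
print("SPARK_HOME:", os.environ.get("SPARK_HOME"))
```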