Processors
Xatkit embeds processors: additional pieces of logic that can be plugged in to tune the intent recognition process. Pre-processors operate on the user input to optimize it before intent extraction (e.g. by adding a question mark at the end of a sentence that is obviously a question). Post-processors operate after the intent recognition process, usually to set additional context parameters (e.g. perform some sentiment analysis, flag whether the input is a yes/no question, etc.).
Unless explicitly stated in the documentation, Xatkit processors can be used with any intent recognition engine. The bot properties file accepts the following keys to enable pre/post processors:
xatkit.recognition.preprocessors = PreProcessor1, PreProcessor2
xatkit.recognition.postprocessors = PostProcessor1, PostProcessor2
The value of each property is a comma-separated list of processor names (see the tables below for the processors embedded in Xatkit and their names).
📚 You can also set these properties programmatically when configuring your bot, using the keys defined in IntentRecognitionProviderFactoryConfiguration:
import static com.xatkit.core.recognition.IntentRecognitionProviderFactoryConfiguration.*;

[...]

Configuration configuration = new BaseConfiguration();
// Bot-specific configuration (NLP engine, database, etc.)
configuration.addProperty(RECOGNITION_PREPROCESSORS_KEY, "PreProcessor1, PreProcessor2");
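Post-processors can be registered the same way. A minimal sketch, assuming the factory configuration also exposes a RECOGNITION_POSTPROCESSORS_KEY constant mirroring the pre-processor key (check IntentRecognitionProviderFactoryConfiguration for the exact constant name):

// Assumed constant mirroring RECOGNITION_PREPROCESSORS_KEY; the value is the same
// comma-separated list of processor names used in the bot properties file.
configuration.addProperty(RECOGNITION_POSTPROCESSORS_KEY, "PostProcessor1, PostProcessor2");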
Data extracted from processors is attached to the current intent and stored in the state context. context.getIntent().getNlpData() is a map containing all the information extracted by Xatkit processors for the current intent. The code below shows how to access the property nlp.stanford.isYesNo (extracted by the IsEnglishYesNoQuestion post-processor) and use it to tune the bot behavior:
state("myState")
.body(context -> {
if((Boolean) context.getIntent().getNlpData().get("nlp.stanford.isYesNo")) {
// Post a reply matching a yes/no question
} else {
// Post a generic reply
}
})
.next()
[...]
Pre-processors

Name | Description | Requirements |
---|---|---|
SpacePunctuationPreProcessor | Adds spaces around punctuation when needed (e.g. from "?" to " ?"). This processor is enabled by default when using the NlpjsIntentRecognitionProvider. | |
Post-processors

Name | Description | Requirements |
---|---|---|
RemoveEnglishStopWords | Removes English stop words from recognized intent parameter values that have been extracted from any entities. This processor helps normalize DialogFlow values when using any entities. | |
IsEnglishYesNoQuestion | Sets the parameter nlp.stanford.isYesNo to true if the user input is a yes/no question, and false otherwise. | See Stanford CoreNLP configuration |
EnglishSentiment | Sets the parameter nlp.stanford.sentiment to a value in ["Very Negative", "Negative", "Neutral", "Positive", "Very Positive"] corresponding to the sentiment extracted from the user input. | See Stanford CoreNLP configuration |
TrimParameterValuesPostProcessor | Removes leading/trailing spaces in extracted parameter values (e.g. from "Barcelona " to "Barcelona"). This processor is enabled by default when using the NlpjsIntentRecognitionProvider. | |
TrimPunctuationPostProcessor | Removes punctuation in extracted parameter values (e.g. from "Barcelona!" to "Barcelona"). This processor is enabled by default when using the NlpjsIntentRecognitionProvider. | |
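For instance, the sentiment computed by the EnglishSentiment post-processor can be accessed from the state context the same way as the yes/no flag shown above. A minimal sketch (the state name and the replies are placeholders):

state("handleFeedback")
    .body(context -> {
        // nlp.stanford.sentiment is set by the EnglishSentiment post-processor
        String sentiment = (String) context.getIntent().getNlpData().get("nlp.stanford.sentiment");
        if ("Negative".equals(sentiment) || "Very Negative".equals(sentiment)) {
            // Post an empathetic reply
        } else {
            // Post a regular reply
        }
    })
    .next()
[...]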
Stanford CoreNLP configuration

Stanford CoreNLP is not embedded by default in Xatkit. Add the following dependencies to your bot's pom.xml if you want to use a Stanford CoreNLP processor:
<dependency>
    <groupId>edu.stanford.nlp</groupId>
    <artifactId>stanford-corenlp</artifactId>
    <version>3.9.2</version>
    <exclusions>
        <exclusion>
            <groupId>com.google.protobuf</groupId>
            <artifactId>protobuf-java</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>edu.stanford.nlp</groupId>
    <artifactId>stanford-corenlp</artifactId>
    <version>3.9.2</version>
    <classifier>models</classifier>
    <exclusions>
        <exclusion>
            <groupId>com.google.protobuf</groupId>
            <artifactId>protobuf-java</artifactId>
        </exclusion>
    </exclusions>
</dependency>
📚 You can use models for a specific language by adapting the classifier. For example, <classifier>models-chinese</classifier> imports the Chinese models.
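Once the dependencies are in place, the Stanford-based post-processors are enabled like any other processor in the bot properties file, for example:

xatkit.recognition.postprocessors = IsEnglishYesNoQuestion, EnglishSentiment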