


Periodic Reporting for period 2 - BITEXT (Building conversational chatbots faster using NLP and machine learning)




Chatbots are still in their infancy in 2019. One of the factors currently limiting the development of the chatbot market is the lack of an appealing customer experience in terms of chatbot accuracy and language availability. The low quality of existing natural language understanding (NLU) systems negatively impacts user experience and the growth of the entire language-based AI sector. One of the main reasons for this poor performance of NLU engines is a lack of training data, particularly for languages other than English.

At BITEXT, we have developed a Natural Language Processing (NLP)-based solution that trains conversational bots easily and makes their replies more human-like. Using our Deep Linguistic Analysis Platform (DLAP) and Natural Language Generation (NLG), we improve bot accuracy by 2x. Because our technology is fully compatible with existing chatbot frameworks (backend content, chatbot agencies, API integrators and NLP bot engines), they can handle user requests without redesigning their architectures.

The overall goal of the project is to accelerate bot training and improve human-machine understanding, as the most accurate technology in this new era of communication between people and machines. The project also aims to unlock the full potential of chatbots, facilitating their development and growth with a technology that is reliable, easy to develop with (minimizing manual work) and embeddable into existing solutions as well as those to come. Chatbots will ensure 24/7 availability, bringing increased response capacity, improved customer support, streamlined inquiries and boosted customer intelligence. In short, the main advantages are:
- Customer care improvement
- Purchase process simplification
- Personalized service
- Resource savings
- User experience improvements
- Improved customer intelligence

Work performed

1. Specifications, architecture and design
The System Specification deliverable provides a formal, high-level description of the developments to be carried out in the project, covering the main functional and non-functional elements of the final system. These high-level requirements have been detailed at a lower level in the System Design, where specific implementation decisions were made.

2.Lexical resources
Lexical-morphological dictionaries have been developed for every target language (English, French, Spanish, Italian, German, Dutch, Portuguese, Swedish and Danish).
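As an illustration of what such a resource contains, the sketch below shows a toy lexical-morphological lookup. The entry format and feature names are our own assumptions for illustration, not DLAP’s actual representation.

```python
# Toy lexical-morphological dictionary (hypothetical format): each surface
# form maps to one or more analyses (lemma + morphological features),
# which downstream analysis grammars can consume.
LEXICON = {
    "lights": [{"lemma": "light", "pos": "NOUN", "number": "plural"},
               {"lemma": "light", "pos": "VERB", "person": 3, "number": "singular"}],
    "lit":    [{"lemma": "light", "pos": "VERB", "tense": "past"}],
}

def analyses(token):
    """Return all morphological analyses recorded for a surface form."""
    return LEXICON.get(token.lower(), [])
```

Ambiguous forms such as “lights” (noun or verb) simply carry several analyses; it is the grammar’s job to pick the right one in context.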

3.Syntactic resources
Analysis grammars for English, French, Spanish, Italian, German, Dutch, Portuguese, Swedish and Danish have been completed, along with generation grammars for the same languages.
The software that analyzes text according to these syntactic resources is in its final version. Testing and fine-tuning of all grammars have been completed.

4.Semantic resources
An ontology has been developed for every vertical (Home, Media, E-commerce) in English and Spanish. The ontologies define relevant sets of words from the point of view of their meaning and role in the specific use case of a vertical. The ontologies for the remaining languages in the Home vertical have also been completed.
Frame definition files for the same vertical and language combinations have been finished; these files formalize the potential meaning of the sentences.
The NLU software is in its final version. Testing and fine-tuning of the semantic resources and frame files have been completed.
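To make the role of frame definition files concrete, here is a minimal sketch assuming a hypothetical format: a frame lists the slots an intent can carry within a vertical, and which ontology class may fill each one.

```python
# Hypothetical frame definition (names and structure are illustrative,
# not the project's actual file format): each frame ties an intent to
# its slots, and each slot to the ontology class that may fill it.
FRAMES = {
    "switch_on": {
        "vertical": "Home",
        "slots": {
            "device": {"ontology_class": "controllable_device", "required": True},
            "location": {"ontology_class": "room", "required": False},
        },
    },
}

def required_slots(frame_name):
    """List the slots that must be filled before the action can run."""
    frame = FRAMES[frame_name]
    return [slot for slot, spec in frame["slots"].items() if spec["required"]]
```

Under this scheme, an utterance that matches `switch_on` but lacks a `device` entity would trigger a follow-up question rather than an action.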

5.Integrations with bots
Endpoints have been developed in the BITEXT API to provide the NLU analysis needed by chatbots. Agents in Dialogflow and Rasa have been deployed. Final versions are available.
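A rough sketch of the request/response shape such an endpoint might use is shown below. The field names and response layout are illustrative assumptions, not the documented BITEXT API.

```python
import json

# Illustrative only: the payload fields and response shape below are
# assumptions about what an NLU analysis endpoint exchanges with a bot.
def build_nlu_request(utterance, language="en"):
    """Build the JSON body a chatbot backend would POST to the endpoint."""
    return json.dumps({"text": utterance, "language": language})

def parse_nlu_response(body):
    """Extract the detected intent and filled slots from a JSON response."""
    data = json.loads(body)
    return data["intent"], data.get("slots", {})

# A Dialogflow or Rasa agent would consume a response like this one:
sample = '{"intent": "switch_on", "slots": {"device": "kitchen lights"}}'
intent, slots = parse_nlu_response(sample)
```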

6. Testing
Annotated training and testing corpora have been created. NLU agents for the different combinations of vertical and language have been tested on both platforms and in three versions: a standard version, a version using Query Simplification and a version using Variants Generation. Tests were performed in two phases, preliminary and final; between the two phases the whole system was refined and fine-tuned. Test results have been gathered. The conclusions are that using Query Simplification or Variants Generation improves the results obtained with the standard version, and that Variants Generation reduces the effort required to produce training and testing data sets.
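The intent-level evaluation step can be sketched as follows; this is a minimal accuracy computation over a toy gold set, not the project’s actual test harness.

```python
# Compare predicted intents against a gold-annotated test corpus and
# report the proportion of utterances whose intent was detected correctly.
def intent_accuracy(predictions, gold):
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Toy gold annotations vs. one agent's predictions (invented data):
gold = ["switch_on", "switch_off", "dim", "switch_on"]
pred = ["switch_on", "switch_off", "switch_on", "switch_on"]
acc = intent_accuracy(pred, gold)  # 3 of 4 correct -> 0.75
```

Running the same corpus through the standard, Query Simplification and Variants Generation configurations yields the per-version comparison described above.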

7. Communication, IPR & Commercialization
Attendance at various chatbot-related fairs and conferences, together with publications on social networks, increased Bitext’s presence in the market. The Bitext web site was renewed with a focus on STD & NLG. For IPR, the Copyright and Trade Secret measures already taken will be continued. The activities carried out throughout the project led us to a more detailed definition of the initial product. We have defined a business innovation plan (objectives, targets and strategy) with an estimation of costs and revenues. The commercial activities over the coming months will be accompanied by a communication plan. The project has also revealed a clear opportunity for conversational businesses in Europe, such as e-commerce or customer support. Scarcity of training and testing data is blocking the development of assistant technologies. If all European languages are to be spoken by chatbots, a different technology paradigm is needed, and we think artificial data is the answer. If AI doesn’t speak European languages, many European citizens will be excluded from technical progress (namely, all those who don’t speak English with a “good enough” accent).

Final results

Most modern conversational chatbot platforms are built on the Natural Language Understanding (NLU) approach of intent detection and slot filling. For any given user utterance, the system tries to determine what the user’s intent is. All intents supported by the bot must be defined by the developer, and each intent may have several “slots”, some of which may be required. Those slots need to be “filled” with compatible entities for the bot to be able to perform the desired action. In this context, the chatbot platform requires the developer not only to specify the intents, slots and entities, but also to supply a large amount of training data, consisting of user utterances tagged with their intent and entities; in particular, all the different ways in which the same intent may be expressed need to be present in the training data. This is definitely a time-consuming and expensive task, still performed by hand.
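A minimal sketch of what such hand-tagged training data looks like, with hypothetical field names modelled loosely on common NLU platforms rather than any specific one:

```python
# Each example pairs an utterance with its intent and the character spans
# of the entities that fill the intent's slots (field names are illustrative).
training_examples = [
    {"text": "turn on the kitchen lights",
     "intent": "switch_on",
     "entities": [{"start": 12, "end": 26, "entity": "device"}]},
    {"text": "play some jazz in the living room",
     "intent": "play_music",
     "entities": [{"start": 10, "end": 14, "entity": "genre"},
                  {"start": 18, "end": 33, "entity": "location"}]},
]
```

Every paraphrase of “turn on the kitchen lights” must appear as another such example, which is exactly why building these data sets by hand scales so poorly.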
There is currently no automated way of generating such data; instead, it must be produced manually, usually by crowdsourcing examples of typical user utterances and then hand-tagging them with the appropriate intent, entities and slots. By contrast, what Bitext has achieved is a process that generates the necessary tagged training data for the bot automatically, without any manual tagging. The system has been designed as an economically competitive solution, leading to faster training (from months to weeks) and higher accuracy (from 70% to 90%).
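The idea of generating tagged variants automatically can be illustrated with a toy template expansion. The templates and tags here are invented for illustration; Bitext’s actual pipeline is grammar-based NLG and far richer than this.

```python
from itertools import product

# Expand a small hand-written definition into many tagged utterances,
# instead of crowdsourcing and hand-tagging each one individually.
verbs = ["turn on", "switch on", "power up"]
devices = ["the kitchen lights", "the heating", "the TV"]
templates = ["{verb} {device}", "please {verb} {device}", "can you {verb} {device}"]

def generate_variants():
    for tpl, verb, device in product(templates, verbs, devices):
        yield {"text": tpl.format(verb=verb, device=device),
               "intent": "switch_on",
               "device": device}

variants = list(generate_variants())
# 3 templates x 3 verbs x 3 devices = 27 tagged examples from one definition
```

Because every generated utterance is built from tagged parts, the intent and entity labels come for free, removing the manual tagging step entirely.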
In our view, the challenge is so critical, and after this project the solution so feasible, that we want to encourage EU authorities to promote the creation of a think tank to work on the idea of a “European Alexa”, so that all European citizens and languages are safely included in this technology trend and we can shop or use self-service tools in our own languages.
