Form Recognizer now reads more languages, processes IDs and invoices, trains on tables, and more
Published Mar 03 2021 08:20 AM

Documents contain invaluable information powering core business processes. Extracting information from these documents with minimum manual intervention helps bolster organizational efficiency and productivity. As more and more processes and workflows get automated, the need for new features to help extract text and structures increases.

Today, we are excited to announce the newest updates to Form Recognizer, which will be available on March 15, 2021.


What’s New?

Form Recognizer v2.1 public preview 3 will be available on March 15, 2021, and it will include:


Extract data from invoices

Invoices are complex documents that vary in structure and contain data vital to an organization's business processes. One of the most challenging tasks in extracting data from invoices is extracting the line items. The Form Recognizer Invoice API now supports line-item extraction: it extracts the full line item and its parts, such as description, amount, quantity, product ID, date, and more. With a simple API/SDK call you can extract all the data from your invoices: text, tables, key-value pairs, and line items.



Figure 1 Line items are extracted from invoices
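The line items returned by the Invoice API can be read out of the Analyze result's documentResults section. The snippet below is a minimal sketch against a simplified sample response; the real response contains many more fields and confidence scores, and the exact field set (Description, Quantity, Amount, ProductCode, etc.) follows the documented prebuilt invoice schema.

```python
import json

# Simplified sample of a v2.1 prebuilt invoice Analyze result.
sample_result = json.loads("""
{
  "analyzeResult": {
    "documentResults": [{
      "fields": {
        "Items": {
          "type": "array",
          "valueArray": [{
            "type": "object",
            "valueObject": {
              "Description": {"type": "string", "valueString": "Consulting services"},
              "Quantity": {"type": "number", "valueNumber": 2},
              "Amount": {"type": "number", "valueNumber": 60.0}
            }
          }]
        }
      }
    }]
  }
}
""")

def extract_line_items(result):
    """Return a list of {field: value} dicts, one per invoice line item."""
    items = result["analyzeResult"]["documentResults"][0]["fields"]["Items"]
    rows = []
    for item in items["valueArray"]:
        row = {}
        for name, field in item["valueObject"].items():
            # Each field carries its value under a type-specific key.
            row[name] = field.get("valueString", field.get("valueNumber"))
        rows.append(row)
    return rows

line_items = extract_line_items(sample_result)
print(line_items)
```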


Extract data from IDs

The new pre-built ID model enables customers to submit worldwide passports and U.S. driver's licenses and get back structured data representing the information on the IDs. The new ID API extracts the text and values of interest from IDs, such as document number, last name, first name, date of expiration, country, and more.



Figure 2 Pre-built ID model can extract information from passports and U.S. driver's licenses
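The ID model's output follows the same documentResults pattern. Below is a minimal sketch over a simplified sample response; the field names (FirstName, LastName, DocumentNumber, DateOfExpiration) follow the documented identity-document schema, and the sample values are placeholders.

```python
import json

# Simplified sample of a v2.1 prebuilt ID Analyze result; the real
# response includes more fields plus confidence scores.
sample_result = json.loads("""
{
  "analyzeResult": {
    "documentResults": [{
      "docType": "prebuilt:idDocument:driverLicense",
      "fields": {
        "FirstName": {"type": "string", "valueString": "LIAM R."},
        "LastName": {"type": "string", "valueString": "TALBOT"},
        "DocumentNumber": {"type": "string", "valueString": "WDLABCD456DG"},
        "DateOfExpiration": {"type": "date", "valueDate": "2025-01-06"}
      }
    }]
  }
}
""")

fields = sample_result["analyzeResult"]["documentResults"][0]["fields"]
# Pick the type-specific value key (valueString or valueDate) per field.
id_data = {name: f.get("valueString", f.get("valueDate")) for name, f in fields.items()}
print(id_data)
```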


Supervised table labeling and training, empty-value labeling

In addition to Form Recognizer's state-of-the-art deep-learning automatic table extraction capabilities, it now also enables customers to train and label tables. This new release includes the ability to label line items/tables (dynamic and fixed) and train a custom model to extract key-value pairs and line items. Once a model is trained and documents are analyzed with it, the line items are extracted as part of the JSON output in the documentResults section.



Figure 3 Label tables in your training dataset
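A custom model trained with table labels returns its line items as array-of-object fields in documentResults, the same pattern the prebuilt invoice model uses. The walker below is a sketch assuming that documented shape; the field names in the sample are hypothetical labels you would define yourself.

```python
import json

# Recursively resolve a documentResults field into plain Python values.
def field_value(field):
    t = field.get("type")
    if t == "array":
        return [field_value(f) for f in field["valueArray"]]
    if t == "object":
        return {k: field_value(v) for k, v in field["valueObject"].items()}
    # Scalar fields keep their value under a type-specific key.
    return field.get("valueString", field.get("valueNumber", field.get("valueDate")))

# Hypothetical output for a custom model with a labeled "LineItems" table.
sample_fields = json.loads("""
{
  "Total": {"type": "number", "valueNumber": 120.0},
  "LineItems": {
    "type": "array",
    "valueArray": [
      {"type": "object", "valueObject": {
        "Item": {"type": "string", "valueString": "Widget"},
        "Price": {"type": "number", "valueNumber": 60.0}
      }}
    ]
  }
}
""")

values = {name: field_value(f) for name, f in sample_fields.items()}
print(values)
```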


In addition to labeling tables, you can now label empty values and regions: if some documents in your training set do not have values for some fields, label them as empty so that your model knows to extract values properly from analyzed documents.





Natural reading order, handwriting classification, and page selection

With this update, you can choose to get the text line outputs in the natural reading order instead of the default left-to-right and top-to-bottom ordering. Set the new readingOrder query parameter to "natural" for a more human-friendly reading order output. For a multi-column page, for example, the first column's text lines are output before the second and third columns'.
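The parameter is passed on the Layout analyze request. A minimal sketch of the request URL, using a placeholder endpoint host for your resource:

```python
from urllib.parse import urlencode

# Placeholder endpoint; substitute your Form Recognizer resource host.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
# readingOrder=natural asks for human-friendly ordering instead of the
# default left-to-right, top-to-bottom output.
params = {"readingOrder": "natural"}
url = f"{endpoint}/formrecognizer/v2.1/layout/analyze?{urlencode(params)}"
print(url)
```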



In addition, for Latin-language text, Form Recognizer will classify each text line as handwritten or not, along with a confidence score.
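In the Layout output, each text line carries an appearance object whose style names the line "handwriting" or "other" with a confidence score. A sketch over a simplified sample response:

```python
import json

# Simplified v2.1 Layout readResults with per-line appearance styles.
sample = json.loads("""
{
  "analyzeResult": {
    "readResults": [{
      "lines": [
        {"text": "Invoice total", "appearance": {"style": {"name": "other", "confidence": 0.9}}},
        {"text": "Thanks!", "appearance": {"style": {"name": "handwriting", "confidence": 0.8}}}
      ]
    }]
  }
}
""")

# Collect the text of every line classified as handwritten.
handwritten = [
    line["text"]
    for page in sample["analyzeResult"]["readResults"]
    for line in page["lines"]
    if line["appearance"]["style"]["name"] == "handwriting"
]
print(handwritten)
```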






Furthermore, when analyzing a multi-page PDF or TIFF, you can now specify which pages you want to analyze.
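Page selection is a query parameter as well, accepting page numbers and ranges. A minimal sketch, again with a placeholder endpoint host:

```python
# Placeholder endpoint; substitute your Form Recognizer resource host.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
# "pages" accepts individual pages and ranges, e.g. pages 1 and 3 through 5.
url = f"{endpoint}/formrecognizer/v2.1/layout/analyze?pages=1,3-5"
print(url)
```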


Pre-built Receipt model quality improvements

This new update includes a number of quality improvements for the pre-built Receipt model, especially around line item extraction.


Our Customers & Partners


AvidXchange has developed an accounts payable automation solution leveraging Form Recognizer. “By partnering with Microsoft, we’re able to deliver an accounts payable automation solution for the middle market that’s truly powered by machine learning,” said Chris Tinsley, Chief Technology Officer at AvidXchange. “Our customers will benefit from faster invoice processing times and increased accuracy so we can help ensure their suppliers are paid the right amount, at the right time.”

WEX has developed a tool to process Explanation of Benefits documents using Form Recognizer. Matt Dallahan, Senior Vice President of Product Management and Strategy, said “The technology is truly amazing. I was initially worried that this type of solution would not be feasible, but I soon realized that the Form Recognizer can read virtually any document with accuracy.”



GEP has developed an invoice processing solution for a client using Form Recognizer. “At GEP, we are seeing AI and automation make a profound impact on procurement and the supply chain. By combining our AI solution with Microsoft Form Recognizer, we automated the processing of 4,000 invoices a day for a client, saving them tens of thousands of hours of manual effort, while improving accuracy, controls and compliance on a global scale,” said Sarateudu Sethi, GEP’s Vice President of Artificial Intelligence.


“At Cross Masters, using cutting-edge AI technologies is not only a passion, it is an essential part of our work culture that requires continuous innovation. One of our latest success stories is the automation of the manual paperwork required to process thousands of invoices. Thanks to Microsoft Form Recognizer’s AI engine, we were able to develop a unique customized solution that provides our clients market insights from a large set of collected invoices. What we find most convenient is the human-beating extraction quality and the continuous introduction of new features, such as model composing or table labeling. This assures our clients’ market advantage and helps our product be a best-in-class solution,” said Jan Hornych, Head of Marketing Automation, Cross Masters.


Try out Form Recognizer

To get started with Form Recognizer, log in to the Azure portal and create a Form Recognizer resource. Once your resource is created, you can start exploring Form Recognizer, with the improvements mentioned above coming on March 15. You can learn more about Form Recognizer here.
























Version history
Last update:
‎Mar 04 2021 03:59 PM