Machine learning
Accelerating Azure Databricks Runtime for Machine Learning
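The acceleration this post describes comes down to swapping in Intel-optimized builds of the usual libraries. A minimal sketch of one common way to enable the scikit-learn side, assuming the scikit-learn-intelex package has been installed on the cluster (the fallback flag is illustrative):

```python
# Hedged sketch: enable Intel-optimized scikit-learn via scikit-learn-intelex.
# Assumes `pip install scikit-learn-intelex` has already been run on the
# Databricks cluster; falls back to stock scikit-learn otherwise.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()        # re-route supported estimators to Intel oneDAL
    ACCELERATED = True
except ImportError:
    ACCELERATED = False    # package not installed: stock scikit-learn is used

# Import scikit-learn estimators only *after* patching so the
# optimized implementations are the ones that get loaded.
```

The patch call must come before any `sklearn` estimator imports; the rest of the training code is unchanged.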
Data Scientists love working in the Azure Databricks environment when developing their machine learning and artificial intelligence models. By simply installing Intel-optimized versions of scikit-learn and TensorFlow as described in this blog post, they can see potentially large performance gains that will save them time and money.

Pre-train and Fine-tune Language Model with Hugging Face and Gaudi HPU
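The fine-tuning workflow the post below walks through can be sketched with the standard Hugging Face Trainer API. Model name, dataset handling, and hyperparameters here are illustrative assumptions, not taken from the original post, and the code is guarded so it degrades gracefully when transformers is absent:

```python
# Hedged sketch of a seq2seq fine-tuning setup for answer-agnostic
# question generation. All names and hyperparameters are illustrative.
try:
    from transformers import (
        AutoModelForSeq2SeqLM,
        AutoTokenizer,
        Seq2SeqTrainer,
        Seq2SeqTrainingArguments,
    )

    def build_trainer(model_name, train_dataset):
        """Assemble a Seq2SeqTrainer for question-generation fine-tuning."""
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
        args = Seq2SeqTrainingArguments(
            output_dir="qg-ko",             # where checkpoints are written
            per_device_train_batch_size=8,
            num_train_epochs=3,
            predict_with_generate=True,     # generate questions during eval
        )
        return Seq2SeqTrainer(
            model=model,
            args=args,
            train_dataset=train_dataset,
            tokenizer=tokenizer,
        )

    HAVE_TRANSFORMERS = True
except ImportError:
    HAVE_TRANSFORMERS = False  # transformers not installed; sketch only
```

On Gaudi HPU the same structure applies, with the Habana-specific trainer classes substituted for the stock ones.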
In this blog, we provide a general guideline for pre-training and fine-tuning language models using Hugging Face. For illustration, we use pre-training language models for question generation (answer-agnostic) in Korean as a running example.

Adding multi-language support for Azure AI applications quickly
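The detection, translation, and transliteration calls the post below describes all share one request shape against the Translator Text API v3. A minimal sketch of constructing those requests; routes and parameters follow the public v3 REST reference, while the key and region values are placeholders:

```python
# Hedged sketch: build Translator Text API v3 requests.
# Key/region are placeholders; nothing is sent over the network here.
from urllib.parse import urlencode

ENDPOINT = "https://api.cognitive.microsofttranslator.com"

def build_request(route, params, texts):
    """Return (url, headers, body) for a Translator Text API v3 call."""
    query = urlencode({"api-version": "3.0", **params}, doseq=True)
    url = f"{ENDPOINT}/{route}?{query}"
    headers = {
        "Ocp-Apim-Subscription-Key": "<your-key>",        # placeholder
        "Ocp-Apim-Subscription-Region": "<your-region>",  # placeholder
        "Content-Type": "application/json",
    }
    body = [{"text": t} for t in texts]  # POST this as the JSON payload
    return url, headers, body

# Which language is the user typing?
detect_url, _, detect_body = build_request("detect", {}, ["Bonjour tout le monde"])

# Translate to English and German in a single call:
translate_url, headers, body = build_request(
    "translate", {"to": ["en", "de"]}, ["Hola, ¿cómo estás?"]
)

# Transliterate Japanese text from Japanese script to Latin script:
translit_url, _, _ = build_request(
    "transliterate",
    {"language": "ja", "fromScript": "Jpan", "toScript": "Latn"},
    ["こんにちは"],
)
# Send with any HTTP client, e.g. requests.post(url, headers=headers, json=body)
```

Because every operation takes the same JSON array of `{"text": ...}` items, one small helper covers detect, translate, and transliterate alike.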
There is a growing demand for applications which support speech, language identification, translation, or transliteration from one language to another. Common questions that customers encounter as they consider possible solutions are: How do we identify which language is being spoken or typed by the user? How do we translate one language to another for the user? Is there a way to transliterate text from one language to another? Can we determine the intent of a user utterance? How do we incorporate natural language understanding into our chatbot application for multiple languages? Complex problems such as these can now be solved using advanced APIs that are readily available, without having to reinvent the wheel; no machine learning expertise is required! This blog starts off with a brief introduction to machine translation and then explores topics such as identifying the language and performing translation or transliteration of spoken or typed text using Microsoft's Translator Text API. In addition, we also discuss how translated or transliterated text can be integrated with LUIS.

Building a digital guide dog for railway passengers with impaired vision
Catching your train on time can be challenging under the best of circumstances. Trains typically only stop for a few minutes, leaving little room for mistakes. For example, at Munich Main station around 240 express trains and 510 regional trains leave from 28 platforms per day. Some trains can also be quite long, up to 346 meters (1,135 ft) for express ICE trains. It is extremely important to quickly find the correct platform and platform section, and then the door closest to a reserved seat needs to be located. This already challenging adventure becomes even more so if a vision impairment forces a customer to rely exclusively on auditory or tactile feedback. When traveling autonomously, without assistance, it is common practice to walk along the outside of a train, continuously tapping it with a white cane, to discover opened and closed doors (figure 1). While this works in principle, the practice has limitations, both in terms of speed and reliability. We therefore partnered with DB Systel GmbH, the digital partner for all Deutsche Bahn Group companies, to build the Digital Guide Dog. This is a feasibility study based on an AI-powered smartphone application that uses computer vision together with auditory and haptic feedback to guide customers to the correct platform section and train car door. In this blog post, we share some of the details and unique challenges that we encountered while building the AI model behind this application.