AI Applied: Use Cases for the Microsoft AI Platform



People don’t easily adapt to technology. They want technology to adapt to them, to help them get things done in a way that’s easy and natural, and that complements how they perceive the world around them, process information, and interact with their surroundings. Artificial Intelligence (AI) is all about amplifying human ingenuity with intelligent technology. Wherever your software company is with AI, Microsoft AI technologies can help make your apps more intelligent and innovative. Microsoft offers the following AI-powered services through the Azure cloud:

  • Build customer-facing AI applications with Cognitive Services Vision, Language, Speech, Knowledge, and Search functionality.
  • Create natural interaction with your users through your apps by building and connecting intelligent conversational bots with the Bot Framework.
  • Design better AI models faster and orchestrate your Machine Learning development cycle with the confidence that your data is protected with enterprise-grade security.
  • Gain market intelligence with Cortana Intelligence by transforming data into intelligent actions.




AI in the Real World

In the last 12 months, as AI technology evolved, I have had the opportunity to work on practical applications of Microsoft AI technologies to real world problems. I’d like to share with you some ideas that show how Azure Machine Learning, Bot Framework, and Cognitive Services can be applied to resolve a variety of challenges in different contexts.

  • Online payment fraud detection
  • Cache hit ratio optimization to improve Web page load time
  • Emergency response dashboard


Online Payment Fraud Detection

The traditional approach to fraud detection in online payments uses rules or logic statements to query transactions and route suspicious ones to human review. Notably, over 90 percent of online fraud detection platforms still use this method, including platforms used by banks and payment gateways. While effective to some degree, particularly where there is a sufficient gap between an order being received and goods being shipped, this approach is also incredibly costly and far slower than the alternatives.

The “rules” in these platforms combine data, horizon scanning, and gut feel, backed by manual reviews that confirm the experts’ decisions. When a credit card data breach is detected, a business recognizes the increased fraud risk of the compromised cards and can simply add a rule to review any transaction made with them. From then on, every attempted purchase with such a card raises an alert and is declined or reviewed. However, this raises two significant issues. The first is that such a generalized rule may turn away millions of legitimate customers, ultimately losing the business money and jeopardizing customer relations. The second is that, while this can deter future threats after a fraud has been found, it fails to identify or predict potential threats the business is not yet aware of.

These rules tend to produce binary results, deeming transactions either good or bad and failing to consider anything in between. And until the rules are manually reviewed, the system will continue to block transactions such as those from the leaked credit cards, even when the risk is no longer prominent.
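To make the binary nature of such rules concrete, here is a minimal sketch of a rule-based check. The names (`is_suspicious`, `BREACHED_CARDS`) and the card IDs are illustrative, not taken from any real platform:

```python
# Illustrative hard-coded rule: flag any transaction from a list of
# breached cards. The verdict is strictly binary -- there is no risk
# score, and the list stays in force until someone edits it manually.
BREACHED_CARDS = {"4111000000000001", "4111000000000002"}

def is_suspicious(transaction: dict) -> bool:
    """Return True if the transaction should be declined or reviewed."""
    return transaction["card_id"] in BREACHED_CARDS

tx_ok = {"card_id": "4111000000009999", "amount": 25.0}
tx_flagged = {"card_id": "4111000000000001", "amount": 25.0}
print(is_suspicious(tx_ok))       # False
print(is_suspicious(tx_flagged))  # True
```

Note that the rule treats a small legitimate purchase and a large fraudulent one identically once the card is listed, which is exactly the limitation described above.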

Machine learning works on the basis of large, historical datasets that have been created using a collection of data across many clients and industries. This aggregation of data provides a highly accurate set of training data, and access to this information allows businesses to choose the right model to optimize recall and precision: of all the truly fraudulent transactions, the proportion the model flags (recall), and of all the transactions it flags, the proportion that actually are fraudulent (precision).
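The two metrics can be computed directly from a model's predictions. This short helper is illustrative, not part of the solution described here:

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall; labels: 1 = fraud, 0 = legitimate."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Toy example: 3 actual frauds, the model flags 3 transactions.
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0]
p, r = precision_recall(y_true, y_pred)
# 2 true positives, 1 false positive, 1 false negative:
# precision = 2/3, recall = 2/3
```

Tuning a fraud model is largely a trade-off between these two numbers: flagging more transactions raises recall but typically lowers precision.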

The Machine Learning experiment that I employed analyzed hundreds of features that contribute, to varying extents, to the fraud probability. The degree to which each feature contributes to the fraud score is not determined by a fraud analyst but is learned by the machine from the training set. So, with regard to the leaked card data, if the use of those credit cards to commit fraud is proven to be high, the fraud weighting of a transaction that uses a compromised credit card will be equally high. If that risk later diminishes, the contribution level follows. Simply put, these models self-learn without explicit programming such as manual review.

The online payment fraud detection service, built in Azure Machine Learning, is based on the One-Class Support Vector Machine algorithm, which is an anomaly detection model. This module is particularly useful in scenarios where you have a lot of “normal” data and not many cases of the anomalies you are trying to detect. For example, if you need to detect fraudulent transactions, you might not have many examples of fraud that you could use to train a typical classification model, but you might have many examples of good transactions.
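The same idea can be sketched outside Azure Machine Learning with scikit-learn's `OneClassSVM`: train only on normal transactions, then score new ones. The features and data below are synthetic and purely illustrative:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Train only on "normal" transactions, here described by two synthetic
# features: (amount in dollars, hour of day).
normal = rng.normal(loc=[50.0, 14.0], scale=[10.0, 3.0], size=(500, 2))

model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
model.fit(normal)

# predict() returns +1 for inliers (looks normal) and -1 for anomalies.
new_tx = np.array([
    [55.0, 13.0],   # a typical purchase
    [900.0, 3.0],   # unusually large amount at 3 a.m.
])
print(model.predict(new_tx))
```

The `nu` parameter bounds the fraction of training points treated as outliers; in practice it is tuned against the precision/recall targets discussed earlier.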




Cache Hit Ratio Optimization

In a multi-tier application, bottlenecks may occur at any of the connection points between two tiers: business logic and data access layers, client and service layers, presentation and storage layers, etc. Large-scale applications benefit from various levels of caching of information to improve performance and increase scalability. But caches are expensive resources with limited storage capacity, so the allocation of data in a cache should be a sensible decision: serve as much data as possible directly from the cache to the requesting client (a hit), and limit the occurrences where data is not found (a miss) and must be retrieved from the backing persistent repository.

To optimize the performance of a cache, you want to increase the hit ratio and decrease the miss ratio. There are different techniques for improving cache performance, from pre-fetching data into the cache on a regular basis, to just-in-time caching, to allocating the most used objects based on counters. Based on patterns of object usage, I employed Machine Learning algorithms to predict the likelihood that an object is going to be requested, so that it can be allocated in the cache before the request is submitted, increasing the chance of a hit.
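The pre-fetching step can be sketched as follows. Here a simple access-frequency score stands in for the ML-predicted likelihood, and all names (`prefetch`, `usage_history`) are illustrative rather than taken from the actual solution:

```python
from collections import Counter

def prefetch(usage_history, cache_size):
    """Rank objects by observed access frequency (a stand-in for an
    ML-predicted access likelihood) and return the top candidates to
    load into a cache of the given size before they are requested."""
    counts = Counter(usage_history)
    total = len(usage_history)
    scores = {obj: n / total for obj, n in counts.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:cache_size]

# "a" was accessed 4 times, "b" twice, "c" and "d" once each.
history = ["a", "b", "a", "c", "a", "b", "d", "a"]
print(prefetch(history, cache_size=2))  # ['a', 'b']
```

In the real solution the score comes from a trained model rather than raw counters, but the allocation logic is the same: fill the limited cache with the objects most likely to produce hits.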




I have described a detailed application of this idea in the July 2017 issue of MSDN Magazine:

Scale Applications with Microsoft Azure Redis and Machine Learning


Emergency Response

As an educational travel and placement organization, my company handles thousands of students every week who attend more than 150 educational institutions worldwide. Emergencies may happen with no notice, whether because of weather-related events, an accident, or even a terrorist attack. How do we react promptly and safeguard the security and safety of our students and staff around the world?

We have developed a system of multiple communication channels to reach out to students, inquire about their safety and security, and build emergency action plans for our staff to use when a rapid response is needed. The emergency response solution is built on Dynamics CRM for responding to emergencies and managing the status and communication of affected students via a dashboard hosted in SharePoint. Integration between the two systems is guaranteed by workflows designed with Azure Logic Apps. Additional data is integrated into the process: the last known location comes from GPS units via Azure IoT Hub; automatic messages and calls in multiple languages are initiated from the CRM, with replies processed by Azure Cognitive Services using text and voice recognition and translation; and a bot, built with the Bot Framework, handles communication between students and school staff and assesses their safety status.
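To illustrate just the reply-triage step of this pipeline: in the real solution the text analysis is performed by Azure Cognitive Services, but a simple keyword match can stand in so the control flow is runnable. All names and word lists here are hypothetical:

```python
# Hedged sketch: classify a student's reply so the workflow can decide
# whether to mark them safe, escalate to staff, or ask a follow-up.
SAFE_WORDS = {"safe", "ok", "fine", "secure"}
HELP_WORDS = {"help", "hurt", "injured", "emergency", "danger"}

def triage_reply(reply: str) -> str:
    """Return 'needs_help', 'safe', or 'unknown' (unknown replies are
    escalated to staff for manual follow-up)."""
    words = set(reply.lower().split())
    if words & HELP_WORDS:
        return "needs_help"
    if words & SAFE_WORDS:
        return "safe"
    return "unknown"

print(triage_reply("I am safe at the hotel"))  # safe
print(triage_reply("please send help"))        # needs_help
```

In production this classification would feed back into the CRM status dashboard, with translation and voice transcription handled upstream by Cognitive Services.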




I have described the end-to-end solution in more detail in an article for MS Dynamics World:

Building an emergency response solution in Dynamics CRM, SharePoint, and the Microsoft Cloud


Next Steps

As the old saying goes, “the sky is the limit”. The potential of the Microsoft AI platform is huge, and it is just waiting to be used for more and more practical applications. As a Microsoft MVP, I feel compelled to share these experiences with the community and, hopefully, spark some new ideas.

Feel free to reach out to me to explore this conversation further.

Your next step: visit the home page of the Microsoft AI platform.



