machine learning
AI-900: Microsoft Azure AI Fundamentals Study Guide
This comprehensive study guide provides a thorough overview of the topics covered in the Microsoft Azure AI Fundamentals (AI-900) exam, including Artificial Intelligence workloads, fundamental principles of machine learning, and computer vision and natural language processing workloads. Learn about the exam's intended audience, how to earn the certification, and the skills measured as of April 2022. Discover the important considerations for responsible AI, the capabilities of Azure Machine Learning Studio, and more. Get ready to demonstrate your knowledge of AI and ML concepts and related Microsoft Azure services with this study guide.
Starting your Kaggle challenge using Azure Machine Learning Services
One of the main advantages of Azure ML is the ability to do hyperparameter optimization by scheduling experiments. Have you tried this with a dataset hosted on Kaggle? Kaggle has over 50,000 public datasets and 400,000 public notebooks to conquer any analysis in no time, and it offers a no-setup, customizable Jupyter Notebooks environment with free access to GPUs and a huge repository of community-published data and code. However, there are times when you want to build your experiment using Azure and an Azure ML workspace in the Azure portal.
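To make the "hyperparameter optimization by scheduling experiments" idea concrete, here is a minimal sketch using the Azure ML Python SDK v2. It is not from the original post: the workspace details, the "cpu-cluster" compute, the environment name, and the ./src/train.py script (assumed to load a downloaded Kaggle CSV and log an "accuracy" metric) are all placeholders you would swap for your own.

```python
# Minimal sketch: schedule a hyperparameter sweep over a training script with azure-ai-ml (SDK v2).
# Assumptions: an existing Azure ML workspace, a compute cluster named "cpu-cluster", and a
# hypothetical ./src/train.py that trains on a Kaggle dataset and logs an "accuracy" metric.
from azure.ai.ml import MLClient, command
from azure.ai.ml.sweep import Choice, Uniform
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Define the training job; hyperparameters are exposed as inputs.
job = command(
    code="./src",
    command="python train.py --learning_rate ${{inputs.learning_rate}} --n_estimators ${{inputs.n_estimators}}",
    inputs={"learning_rate": 0.01, "n_estimators": 100},
    environment="<environment-name>@latest",  # placeholder: a curated or custom environment in your workspace
    compute="cpu-cluster",
)

# Replace the fixed inputs with a search space, then turn the command into a sweep job.
job_for_sweep = job(
    learning_rate=Uniform(min_value=0.001, max_value=0.1),
    n_estimators=Choice(values=[50, 100, 200]),
)
sweep_job = job_for_sweep.sweep(
    sampling_algorithm="random",
    primary_metric="accuracy",
    goal="Maximize",
)
sweep_job.set_limits(max_total_trials=20, max_concurrent_trials=4)

# Submit the sweep; each trial runs as a scheduled experiment you can track in Azure ML studio.
returned_job = ml_client.jobs.create_or_update(sweep_job)
print("Submitted sweep job:", returned_job.name)
```

Each trial shows up in the workspace as its own run, which is what makes this pattern a natural fit for iterating on a Kaggle dataset.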
Microsoft Learn AI Skills Challenge
Join Microsoft's AI Skills Challenge 2023 to enhance your technical expertise in Artificial Intelligence. Register now to access exclusive resources, hands-on labs, and interactive learning sessions. Boost your knowledge in generative AI, machine learning, cognitive services, natural language processing, and computer vision to stay ahead in the ever-evolving world of AI.
120 Days Study Plan to Become an AI-Focused Full-Stack Software Engineer
Hello there, my name is Oumaima, and I am an MLSA student ambassador from Morocco, studying at the University of the People. Welcome to the first step in my exciting, unpredictable journey, one I've chosen to embark on with you!

For the past three years, I've watched the AI industry evolve dramatically. Generative AI has shifted from a fascinating experiment to an integral part of our everyday lives, whether at school, work, or even in our personal routines. In fact, my ChatGPT app is now my go-to therapist, lawyer, and all-around advisor! As a software engineering student for over three years, I've seen the growth of generative AI up close. But this shift didn't just inspire me; it made me realize that I don't want to remain only a consumer of this technology. I want to contribute to it!

Seeing AI's ability to mimic human thought, draw connections from vast amounts of information, and deliver impressive results sparked something in me. It showed me that the best way to break into AI might just be to use AI itself as my guide. That's when the idea came to ask ChatGPT o1-preview for a personalized study plan, crafted uniquely for me. It takes into account my available time, coding background, learning preferences, mental health, and energy. Here's how my journey began with a simple prompt:

I want to become an AI-focused full-stack software engineer and have 120 days to dedicate to this goal. Please create a detailed 120-day study plan tailored for me, dedicating 3-4 hours daily. The study plan should:
- Cover all essential topics including programming foundations, data structures and algorithms (DS&A), mathematics for AI, machine learning fundamentals, deep learning, advanced AI topics, integrating AI into applications, web development basics for AI integration, advanced web development, full-stack project development, scripting, DevOps, and career development.
- Include weekly breakdowns and daily tasks.
- Provide recommended resources for each topic (e.g., online courses, tutorials, documentation).
- Suggest hands-on projects or exercises to apply the concepts learned.
- Incorporate tips for success, such as active engagement, seeking feedback, balancing depth and breadth, and maintaining well-being.
- Emphasize developing all the skills that will make me an irreplaceable software developer, including scripting and DevOps skills.
- Conclude with a summary and final advice.
Please ensure the plan is structured, comprehensive, and practical for someone balancing work and study.

It then generated the following plan, which I tried to follow using Microsoft Learn learning paths that offer in-depth training on each topic:

Days 1–25: Programming Foundations & Data Structures and Algorithms (DS&A)
Microsoft Learn path suggestion: Python for beginners
Days 26–50: Mathematics for AI & Machine Learning Fundamentals
Microsoft Learn path suggestion: Introduction to machine learning
Days 51–80: Deep Learning & Advanced AI Topics
Microsoft Learn path suggestion: Train and evaluate deep learning models
Days 81–100: Integrating AI into Applications
Microsoft Learn path suggestion: Microsoft Azure AI Fundamentals: Generative AI
Days 101–115: Advanced Web Development & Full-Stack Project Development
Microsoft Learn path suggestion: Build an AI web app by using Python and Flask
Days 116–120: Portfolio Projects and Industry Trends

Not going to lie, the roadmap turned out to be even more exciting than I'd expected!
When I asked for the plan, I specified that it should guide me through developing problem-solving skills directly tied to full-stack development. I wanted a path that not only sharpens my abilities but also lets me build interesting, hands-on applications where I can see the results of what I'm learning.

And now, my friends, the journey has officially begun! I'll be following the roadmap closely, documenting my weekly progress as I learn AI, noting the challenges, and celebrating the accomplishments. The goal is to see whether artificial intelligence can really help create a customized study plan that aligns with my personal goals, circumstances, and unique learning rhythm. So, stay tuned: this is only the beginning! See you in my first step with DSA!
Analyzing Earth's Climate with Capstone Projects
Imagine if we knew when, and why, a heatwave was approaching. That isn't possible today, but building effective ways to analyze climate projection models, as this capstone team did with NASA, can bring researchers closer to answers.
Build your first ML-Model with ML.NET Model Builder
Excited to dive into machine learning in .NET? With tools like ML.NET Model Builder and Visual Studio, it's a breeze. Here's a preview of the steps you'll take:
1. Download Visual Studio 2022 with the .NET desktop development workload and ML.NET Model Builder.
2. Create a .NET console app named myMLApp.
3. Add a machine learning model named SentimentModel.mbconfig.
4. Choose the Data classification scenario.
5. Select Local (CPU) as the training environment.
6. Prepare and import your data.
7. Train the model.
8. Evaluate its performance.
9. Consume the model using the provided code.
10. Run and debug to observe the results.
Now you're all set to leverage ML.NET for predictive models in your .NET apps!
Power Up Your Open WebUI with Azure AI Speech: Quick STT & TTS Integration

Introduction
Ever found yourself wishing your web interface could really talk and listen back to you? With a few clicks (and a bit of code), you can turn your plain Open WebUI into a full-on voice assistant. In this post, you'll see how to spin up an Azure Speech resource, hook it into your frontend, and watch as user speech transforms into text and your app's responses leap off the screen in a human-like voice. By the end of this guide, you'll have a voice-enabled web UI that actually converses with users, opening the door to hands-free controls, better accessibility, and a genuinely richer user experience. Ready to make your web app speak? Let's dive in.

Why Azure AI Speech?
We use the Azure AI Speech service in Open WebUI to enable voice interactions directly within web applications. This allows users to:
- Speak commands or input instead of typing, making the interface more accessible and user-friendly.
- Hear responses or information read aloud, which improves usability for people with visual impairments or those who prefer audio.
- Enjoy a more natural, hands-free experience, especially on devices like smartphones or tablets.
In short, integrating the Azure AI Speech service into Open WebUI helps make web apps smarter, more interactive, and easier to use by adding speech recognition and voice output features.

If you haven't hosted Open WebUI already, follow my other step-by-step guide to host Ollama WebUI on Azure, and proceed to the next step once you have Open WebUI deployed. Learn more about Open WebUI here.

Deploy the Azure AI Speech service in Azure
1. Navigate to the Azure portal and search for Azure AI Speech in the portal search bar.
2. Create a new Speech resource by filling in the fields on the resource creation page.
3. Click "Create" to finalize the setup.
4. After the resource has been deployed, click the "View resource" button; you should be redirected to the Azure AI Speech service page. The page displays the API keys and endpoints for the Azure AI Speech service, which you can use in Open WebUI.

Setting things up in Open WebUI

Speech-to-Text settings (STT)
Head to the Open WebUI Admin page > Settings > Audio. Paste the API key obtained from the Azure AI Speech service page into the API key field. Unless you use a different Azure region or want to change the default STT configuration, leave the remaining settings blank.

Text-to-Speech settings (TTS)
Next, configure the TTS settings in Open WebUI by switching the TTS Engine to the Azure AI Speech option. Again, paste the API key obtained from the Azure AI Speech service page and leave the remaining settings blank. You can change the TTS voice from the dropdown selection in the TTS settings. Click Save to apply the changes.

Expected Result
Now, let's test that everything works. Open a new chat (or a temporary chat) in Open WebUI and click the Call / Record button. The STT engine (Azure AI Speech) should recognize your voice and provide a response based on the voice input. To test the TTS feature, click Read Aloud (the speaker icon) under any response from Open WebUI. The TTS engine should now be Azure AI Speech!
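Open WebUI makes all of the Speech calls for you, but if the Call button stays silent it can help to test the resource directly. Here is a minimal sketch using the Azure Speech SDK for Python (pip install azure-cognitiveservices-speech); it is not part of the original walkthrough, and the key, region, and voice name are placeholders you would replace with values from your own Speech resource.

```python
# Minimal sketch: verify an Azure AI Speech resource outside Open WebUI.
# Assumes the azure-cognitiveservices-speech package and a machine with a default microphone and speakers.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="<your-speech-api-key>",  # placeholder: key from the Speech resource page
    region="<your-region>",                # placeholder: the region you deployed to
)

# Speech to text: transcribe a single utterance from the default microphone.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
print("Say something...")
stt_result = recognizer.recognize_once_async().get()
if stt_result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized:", stt_result.text)
else:
    print("STT did not return text:", stt_result.reason)

# Text to speech: read a reply aloud with a neural voice (placeholder voice name,
# similar to the voices offered in the Open WebUI TTS dropdown).
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
tts_result = synthesizer.speak_text_async("Azure AI Speech is wired up correctly.").get()
print("TTS finished with reason:", tts_result.reason)
```

If both calls succeed here but Open WebUI still stays quiet, the issue is likely in the Audio settings (wrong region or engine selection) rather than in the Speech resource itself.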
Conclusion
And that's a wrap! You've just given your Open WebUI the gift of capturing user speech, turning it into text, and then talking right back with Azure's neural voices. Along the way you saw how easy it is to spin up a Speech resource in the Azure portal, wire up real-time transcription in the browser, and pipe responses through the TTS engine.

From here, it's all about experimentation. Try swapping in different neural voices or dialing in new languages. Tweak how you start and stop listening, play with silence detection, or add custom pronunciation tweaks for those tricky product names. Before you know it, your interface will feel less like a web page and more like a conversation partner.