Data Driven Analytics for Responsible Business Solutions: learning how to work with Power BI
Introduction

In this blog post, we showcase the project we worked on over the last couple of weeks, in which we analysed a dataset using Power BI and its machine learning capabilities. For this, we were given the fictitious case of VenturaGear. The company was faced with the challenge of new competition, and it was our job to provide data-driven insight into customer behaviour, feedback, and preferences. The objective was to support more effective customer targeting by identifying patterns and segments that could inform strategic decision-making, while ensuring ethical and responsible use of data. Before we jump into the course and our final results, we would like to introduce ourselves and the roles we had.

Product Owner: Kylie Eggen

Hello everyone! My name is Kylie, and I am currently finishing my Master’s in Responsible Digitalisation. During the DARBS course, I had the role of product owner, which allowed me to develop a deeper understanding of both data analysis and the ethics of handling sensitive data. The course provides you with skills that could be useful in your future career, which is very nice. I liked the learning experience a lot and will definitely use it in the future! Kylie Eggen | LinkedIn

Data Analyst: Ha Nguyen

I am currently in the final stage of my Master’s degree in Responsible Digitalisation, focusing on the ethical and strategic use of data-driven technologies. With five years of experience using Excel for data analysis, I have developed a strong foundation in data handling and visualisation. This course allowed me to expand my skills by learning to create interactive dashboards and generate actionable insights using Power BI. These competencies strengthen my ability to support responsible, data-driven decision-making in my future professional career. Ha Nguyen | LinkedIn

Data Analyst: Rianne van Ee

Hello! My name is Rianne, and I am currently completing my Master’s degree in Responsible Digitalisation.
I chose this specialisation because I am very interested in new technologies and different perspectives. Since I am also very interested in data analysis and learning about new software, the DARBS course appealed to me. I am excited to apply my new skills in a professional environment. Rianne van Ee | LinkedIn

Data Visualisation Consultant: Aya Torqui

Hello! My name is Aya Torqui, and I am a Master’s student in Responsible Digitalisation at Radboud University. One of the reasons I chose this specialisation is my strong interest in how companies transform raw and sometimes ambiguous data into valuable business decisions. The DARBS course therefore provided the perfect opportunity to gain new and deeper insights into this process. In my role as Data Visualisation Consultant, I developed new skills not only in designing visually attractive and interesting dashboards, but also in communicating a meaningful and coherent story through them. I am grateful for the opportunity to have developed these skills during the course, and I look forward to further broadening and strengthening them in my future career. Aya Torqui | LinkedIn

Data Visualisation Consultant: Ting Yu

Hi! My name is Ting Yu. I am currently a Master’s student in Civil Law and Responsible Digitalisation. I found the DARBS course quite interesting, and it was a whole new experience for me, because I learned that numbers are not boring. With a dashboard, it is possible to tell a story and help organisations. What I also really liked about this course was the creative side: not only was it fun to play around with different charts and colour schemes for the dashboard, but also to make the video! I am curious to see what the future possibilities are. Ting Yu | LinkedIn

Project Overview

The goal of this project was to provide data-driven managerial recommendations to the fictitious company VenturaGear.
Our task was to deliver a final report and a video blog in which we discussed the company’s data and gave recommendations on how to improve. Our focus was on supporting more effective customer targeting by identifying patterns and segments that could inform strategic decision-making. Throughout the process, one of our main goals was to keep the data analysis responsible and ethical.

Project Journey

The course followed a clear structure, allowing us to learn about Power BI gradually and expand our skills and knowledge over a couple of weeks. We started off by completing lab work: every week we completed several online courses and spent one lecture applying the knowledge from these courses in a lab assignment. After a few weeks, we applied our knowledge in a milestone assignment. This was the first time we really applied our newfound skills in a practical manner, and a good opportunity to see whether we could actually apply what we had learned. The milestone also came with a machine learning aspect. Even though we had a short introduction to the topic in class, none of us had worked with machine learning before. We were able to transfer what we had learned about picking up a new system, like Power BI, to another one, in this case machine learning. While we really struggled at the start, after some time we figured it out and were able to work with the technology. This milestone assignment was the perfect preparation for the final assignment, which also had a machine learning aspect. We now knew where to start, what data to include, and what to consider on the ethical side of things, such as what information needs to be anonymised or left out completely. Eventually, all our newfound knowledge was combined in the final assignment and video blog.

Technical Details

Microsoft Power BI served as the main analytical environment throughout the project.
We began by importing multiple CSV datasets into Power BI and preparing the data using Power Query. This involved cleaning duplicate records, correcting formatting inconsistencies, and transforming variables to ensure accurate calculations and reliable analysis. We then created a relational data model connecting key tables such as sales transactions, product information, customer behaviour, and sales reasons. Establishing these relationships allowed us to analyse data across multiple dimensions and generate deeper insights into customer activity and online purchasing patterns. Interactive dashboards were developed using Power BI’s visualisation tools, accessible colour themes, and slicers, allowing users to explore insights dynamically. Rather than presenting static results, the dashboard encouraged managers to interact with the data and investigate patterns independently. In addition to descriptive analytics, we applied a machine learning model (XGBoost) to identify factors influencing the sales of the top revenue-generating products. This introduced us to predictive analytics and highlighted the importance of feature selection, handling missing values, and critically interpreting model outputs. Combining visualisation with machine learning enabled us to move beyond reporting toward data-driven decision support.

Results and Outcomes

Before we could analyse our data, we ran into a few problems. Firstly, our unit prices were inflated in the dataset: the decimal point had been removed, leading to unreasonably high prices. To solve this, we recalculated the LineTotal from a corrected unit price. Another problem was that a lot of data was missing. We noticed this while looking at the sales reasons, where a third of the records were blank. We ended up excluding the blank values, so that we could still analyse the remaining data.
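To make these corrections concrete, here is a minimal sketch in Python (pandas) of the kind of cleaning we performed. The column names (UnitPrice, OrderQty, LineTotal, SalesReason) and the factor of 100 are illustrative assumptions, not the exact formula from our report.

```python
import pandas as pd

# Illustrative sample mimicking the issues we found (assumed column names).
sales = pd.DataFrame({
    "UnitPrice": [239999, 459999, 129999],   # decimal point lost: 2399.99 became 239999
    "OrderQty": [2, 1, 3],
    "SalesReason": ["Price", None, "Price"],  # about a third of the reasons were blank
})

# Restore the lost decimal point (assuming two decimal places were dropped).
sales["UnitPrice"] = sales["UnitPrice"] / 100

# Recalculate LineTotal from the corrected unit price.
sales["LineTotal"] = sales["UnitPrice"] * sales["OrderQty"]

# Exclude blank sales reasons so the remaining records stay analysable.
with_reason = sales.dropna(subset=["SalesReason"])

print(sales["LineTotal"].tolist())  # corrected line totals
print(len(with_reason))             # 2
```

In Power BI itself, the same corrections live in Power Query or a calculated column; the sketch only shows the arithmetic involved.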
To effectively target customers, we felt it was important to analyse the reasons people made their purchases. Through our analysis, we found that for VenturaGear the biggest contributor was price, and that the company mainly made its sales in Australia.

Lessons Learned

1. Working with new systems
The main lesson we learned is how to start using a new system. The way in which we were taught to use Power BI showed us a good way of approaching new things, which we believe can be useful in other areas of our professional lives.

2. Data analysis
Most of us were a little intimidated when we first heard that we were going to analyse data with a new program. However, once we started, we noticed that when we all put our minds to it, it is quite manageable. We have all gained some understanding of data analysis and how to visualise it.

3. Teamwork
A big factor during this project was teamwork. Our team was divided into different roles, which meant there was collaboration between the two data analysts and the data visualisation consultants, but also between the different roles. We found it really important to have teamwork between all these actors, and we noticed that the further we got into the project, the smoother this interaction went.

Collaboration and Teamwork

On this project, we worked as a team of five. Kylie Eggen was the Product Owner; her role was to keep an overview of the project. Ha Nguyen and Rianne van Ee were the Data Analysts, and Aya Torqui and Ting Yu were the Data Visualisation Consultants. We mostly stuck to our roles, but noticed that everything needed to happen in collaboration. So even though we were all mainly busy with our own roles, we were all involved in each other’s work as well. We noticed this really helped in making the project a coherent whole.

Future Development

While this project generated valuable insights, there are several opportunities for further development.
A potential next step would be integrating real-time data into Power BI. Expanding the dashboard with automated data refresh would allow managers to track performance continuously and respond more quickly to changing customer behaviour. Another area for future development involves extending the machine learning component. Rather than focusing only on identifying predictors of key revenue-generating products, the model could be expanded to include customer segmentation, such as grouping customers into categories like high-value customers, discount-sensitive buyers, or frequent online shoppers. In addition, the model could be developed further to support purchase prediction, enabling forecasts of seasonal demand, identifying customers likely to make repeat purchases, and determining which products are most preferred by specific customer groups. These enhancements would provide a more dynamic understanding of customer behaviour and support more targeted, data-driven decision-making. Incorporating more complete behavioural data or improving survey participation rates would also help reduce missing values and increase the reliability of insights. Finally, the organisation could consider introducing clear consent options on the web shop to help customers better understand what data is being collected. These options would also allow customers to choose what information they want to share, improving transparency and strengthening customer trust.

Conclusion

This project taught us how data analytics can help organisations make smarter and more responsible business decisions. Using Power BI, we transformed complex customer and sales data into clear, interactive insights that help managers better understand online behaviour, purchasing motivations, and performance trends. Beyond building technical skills, we also learned how important data quality, transparency, and ethical considerations are when working with sensitive customer data.
Throughout the project, we discovered that data analysis is an iterative process that requires continuous evaluation, critical thinking, and careful interpretation of results. Most importantly, we realised that meaningful analytics is never an individual effort but a collaborative process, where teamwork and shared problem-solving play a key role in turning data into valuable insights. Overall, this project strengthened our ability to bridge technical analytics with responsible digitalisation principles. By combining business understanding, visualisation skills, and ethical awareness, we gained a clearer perspective on how tools like Power BI can enable professionals to create meaningful, data-driven solutions that are both impactful and responsible.

Call to Action

After experiencing this learning journey, we encourage you to engage with tools such as Power BI. As our teacher told us, ‘‘You are going to hit a wall.’’ That is exactly what happened to us, but pushing through those moments allowed us to build a deeper understanding and develop new skills. At the same time, we tried to stay aware of the ethical implications of working with data; during the project, we always made sure to stay transparent and responsible in our analysis. We encourage you to challenge yourself! Experiment with new technologies and step outside of your comfort zone. Also remember that a strong analysis does not depend on technical skills alone: it is also about staying transparent, responsible, and trustworthy. On behalf of group 3, thank you for taking the time to read our summary. We hope it has been useful. Feel free to reach out with any remaining questions!
🚀 Git-Driven Deployments for Microsoft Fabric Using GitHub Actions
👋 Introduction

If you've been working with Microsoft Fabric, you've likely faced this question: "How do we promote Fabric items from DEV → QA → PROD reliably, consistently, and with proper governance?"

Many teams default to the built-in Fabric Deployment Pipelines — and they work great for simpler scenarios. But what happens when your enterprise demands:

- 🔒 Centralized governance across all platforms (infra, app, and data)
- 📜 Full audit trail of every change tied to a Git commit
- ✅ Approval gates with reviewer-based promotion
- 🔑 Per-environment service principal isolation
- 🧩 Alignment with your existing DevOps standards

That's exactly the problem we set out to solve. In this post, I'll walk you through a production-ready, enterprise-grade CI/CD solution for Microsoft Fabric using the fabric-cicd Python library and GitHub Actions — with zero dependency on Fabric Deployment Pipelines.

🎯 What Problem Are We Solving?

Traditional Fabric promotion workflows often look like this:

| Step | Method | Problem |
| --- | --- | --- |
| Build in DEV workspace | Fabric Portal UI | ✅ Works fine |
| Promote to QA | Fabric Deployment Pipeline or manual copy | ⚠️ No Git traceability |
| Promote to PROD | Fabric Deployment Pipeline with approval | ⚠️ Separate governance model from app/infra CI/CD |
| Rollback | 🤷 Manual recreation | ❌ No deterministic rollback path |
| Audit | "Who clicked what, when?" | ❌ Limited trail |

The Core Issue

Fabric Deployment Pipelines introduce a parallel governance model that's disconnected from how your platform and application teams already work.
You end up with:

- 🔀 Two different promotion systems (GitHub Actions for apps, Fabric Pipelines for data)
- 🕳️ Governance blind spots between the two
- 😰 Cultural friction ("Why do data teams have a different process?")

Our Approach: Git as the Single Source of Truth 📖

```
┌─────────────┐    push to main     ┌─────────────┐
│ Developer   │ ──────────────────▶ │  GitHub     │
│ commits to  │                     │  Actions    │
│ Git repo    │                     │  Workflow   │
└─────────────┘                     └──────┬──────┘
                                           │
                         ┌─────────────────┼─────────────────┐
                         ▼                 ▼                 ▼
                   ┌──────────┐      ┌──────────┐      ┌──────────┐
                   │ 🟢 DEV   │      │ 🟡 QA    │      │ 🔴 PROD  │
                   │ Auto     │─────▶│ Approval │─────▶│ Approval │
                   │ Deploy   │      │ Required │      │ Required │
                   └──────────┘      └──────────┘      └──────────┘
```

Every deployment originates from Git. Every promotion is traceable to a commit SHA. Every environment has its own approval gate. One pipeline model — across everything.

🏗️ Solution Architecture

📁 Repository Structure

```
fabric-cicd-project/
│
├── 📂 .github/
│   ├── 📂 workflows/
│   │   └── 📄 fabric-cicd.yml     # GitHub Actions pipeline
│   ├── 📄 CODEOWNERS              # Review enforcement
│   └── 📄 dependabot.yml          # Automated dependency updates
│
├── 📂 config/
│   └── 📄 parameter.yml           # Environment-specific parameterization
│
├── 📂 deploy/
│   ├── 📄 deploy_workspace.py     # Main deployment entrypoint
│   └── 📄 validate_repo.py        # Pre-deployment validation
│
├── 📂 workspace/                  # Fabric items (Git-integrated / PBIP)
│
├── 📄 .env.example                # Environment variable template
├── 📄 .gitignore
├── 📄 ruff.toml                   # Python linting config
├── 📄 requirements.txt            # Pinned dependencies
├── 📄 SECURITY.md                 # Vulnerability disclosure policy
└── 📄 README.md
```

🔧 Key Components

| Component | Purpose |
| --- | --- |
| fabric-cicd Python library | Deploys Fabric items from Git to workspaces (handles all Fabric API calls internally) |
| deploy_workspace.py | CLI entrypoint — authenticates, configures, deploys, logs |
| parameter.yml | Find-and-replace rules for environment-specific values (connections, lakehouse IDs, etc.) |
| validate_repo.py | Pre-flight checks — validates repo structure, parameter.yml presence, .platform files |
| fabric-cicd.yml | GitHub Actions workflow — orchestrates validate → DEV → QA → PROD |

✨ Feature Deep Dive

1️⃣ Per-Environment Service Principal Isolation 🔐

Instead of a single shared service principal, each environment gets its own:

- DEV_TENANT_ID / DEV_CLIENT_ID / DEV_CLIENT_SECRET
- QA_TENANT_ID / QA_CLIENT_ID / QA_CLIENT_SECRET
- PROD_TENANT_ID / PROD_CLIENT_ID / PROD_CLIENT_SECRET

Why this matters:

- 🛡️ Least-privilege access — the DEV SP can't touch PROD
- 🔍 Audit clarity — you know which identity deployed where
- 💥 Blast radius reduction — a compromised DEV secret doesn't affect PROD

The deploy script automatically resolves the correct credentials based on TARGET_ENVIRONMENT, with fallback to shared FABRIC_* variables for simpler setups.

2️⃣ Environment-Specific Parameterization 🎛️

A single parameter.yml drives all environment differences:

```yaml
find_replace:
  - find: "DEV_Lakehouse"
    replace_with:
      DEV: "DEV_Lakehouse"
      QA: "QA_Lakehouse"
      PROD: "PROD_Lakehouse"
  - find: "dev-sql-server.database.windows.net"
    replace_with:
      DEV: "dev-sql-server.database.windows.net"
      QA: "qa-sql-server.database.windows.net"
      PROD: "prod-sql-server.database.windows.net"
```

✅ Same Git artifacts → different runtime bindings per environment
✅ No manual edits between promotions
✅ Easy to review in pull requests

3️⃣ Approval-Gated Promotions ✅

The GitHub Actions workflow uses GitHub Environments with reviewer requirements:

| Environment | Trigger | Approval |
| --- | --- | --- |
| 🟢 DEV | Automatic on push to main | None — deploys immediately |
| 🟡 QA | After successful DEV deploy | ✅ Requires reviewer approval |
| 🔴 PROD | After successful QA deploy | ✅ Requires reviewer approval |

Reviewers see a rich job summary in GitHub showing:

- 📌 Git commit SHA being deployed
- 🎯 Target workspace and environment
- 📦 Item types in scope
- ⏱️ Deployment duration
- ✅ / ❌ Final status

4️⃣ Pre-Deployment Validation 🔍

Before any deployment runs, a dedicated validate job checks:

| Check | What It Does |
| --- | --- |
| 📂 workspace exists | Ensures Fabric items are present |
| 📄 parameter.yml exists | Ensures parameterization is configured |
| 📄 .platform files present | Validates Fabric Git integration metadata |
| 🐍 ruff check deploy/ | Lints Python code for syntax errors and bad imports |

If validation fails, no deployment runs — across any environment.

5️⃣ Full Git SHA Traceability 📜

Every deployment logs and surfaces the exact Git commit being deployed. Why this matters:

- 🔄 Rollback = git revert <sha> + push → pipeline redeploys previous state
- 🕵️ Audit = every PROD deployment tied to a specific commit, reviewer, and timestamp
- 🔀 Diff = git diff v1..v2 shows exactly what changed between deployments

6️⃣ Concurrency Control 🚦

```yaml
concurrency:
  group: fabric-deploy-${{ github.ref }}
  cancel-in-progress: false
```

Two rapid pushes to main won't cause parallel deployments fighting over the same workspace. The second run queues until the first completes.

7️⃣ Smart Path Filtering 🧠

```yaml
paths-ignore:
  - "**.md"
  - "docs/**"
  - ".vscode/**"
```

A README-only commit? A docs update? No deployment triggered. This saves runner minutes and avoids unnecessary approval requests for QA/PROD.

8️⃣ Retry Logic with Exponential Backoff 🔁

The deploy script wraps fabric-cicd calls with retry logic:

- Attempt 1 → fails (HTTP 429 rate limit) → ⏳ wait 5 seconds
- Attempt 2 → fails (HTTP 503 transient) → ⏳ wait 15 seconds
- Attempt 3 → succeeds ✅

Transient Fabric service issues don't break your pipeline — the deployment retries automatically.

9️⃣ Orphan Cleanup 🧹

Set CLEAN_ORPHANS=true and items that exist in the workspace but not in Git get removed:

- Workspace has: Notebook_A, Notebook_B, Notebook_C
- Git repo has: Notebook_A, Notebook_B
- → Notebook_C gets removed (orphan)

This ensures your workspace exactly matches your Git state — no drift, no surprises.
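The retry behaviour described above can be sketched in a few lines of Python. This is an illustrative wrapper under assumed delays (5 s, then 15 s) and an assumed notion of a "transient" error — not the actual deploy script.

```python
import time

class TransientError(Exception):
    """Stand-in for a retryable failure such as HTTP 429/503."""

def retry_with_backoff(operation, attempts=3, base_delay=5, factor=3, sleep=time.sleep):
    """Run `operation`, retrying transient failures with exponential backoff."""
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == attempts:
                raise          # out of attempts: surface the failure to the pipeline
            sleep(delay)       # wait 5 s, then 15 s, ...
            delay *= factor

# Example: an operation that fails twice (rate limit, transient outage), then succeeds.
calls = {"n": 0}
def flaky_deploy():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("HTTP 429/503")
    return "deployed"

result = retry_with_backoff(flaky_deploy, sleep=lambda s: None)  # skip real waiting in the demo
print(result)        # deployed
print(calls["n"])    # 3
```

A non-transient error (authentication failure, bad config) should not be wrapped this way — retrying it only delays the inevitable failure.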
🔟 Dependency Management with Dependabot 🤖

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```

fabric-cicd, azure-identity, and GitHub Actions versions are automatically monitored. When updates are available, Dependabot opens a PR — keeping your pipeline secure and current.

1️⃣1️⃣ CODEOWNERS Enforcement 👥

```
# .github/CODEOWNERS
/deploy/            @platform-team
/config/            @platform-team
/.github/workflows/ @platform-team
```

Changes to deployment scripts, parameterization, or the workflow require review from the platform team. No one accidentally modifies the pipeline without oversight.

1️⃣2️⃣ Job Timeouts ⏱️

| Job | Timeout |
| --- | --- |
| Validate | 10 minutes |
| Deploy (DEV/QA/PROD) | 30 minutes |

A hung process won't burn 6 hours of runner time. It fails fast, alerts the team, and frees the runner.

1️⃣3️⃣ Security Policy 🛡️

A dedicated SECURITY.md provides:

- 📧 Responsible vulnerability disclosure process
- ⏰ 48-hour acknowledgement SLA
- 📋 Best practices for contributors (no secrets in code, least-privilege SPs, 90-day rotation)

🔄 The Complete Workflow

Here's what happens end-to-end when a developer merges a PR:

```
1. 👨💻 Developer merges PR to main
   │
2. 🔍 VALIDATE job runs
   │  ✅ Repo structure checks
   │  ✅ Python linting (ruff)
   │  ✅ parameter.yml validation
   │
3. 🟢 DEPLOY-DEV job runs (automatic)
   │  🔑 Authenticates with DEV SP
   │  📦 Deploys all items to DEV workspace
   │  📝 Logs commit SHA + summary
   │
4. 🟡 DEPLOY-QA job waits for approval
   │  👀 Reviewer checks job summary
   │  ✅ Reviewer approves
   │  🔑 Authenticates with QA SP
   │  📦 Deploys all items to QA workspace
   │
5. 🔴 DEPLOY-PROD job waits for approval
   │  👀 Reviewer checks job summary
   │  ✅ Reviewer approves
   │  🔑 Authenticates with PROD SP
   │  📦 Deploys all items to PROD workspace
   │
6. 🎉 Done — all environments in sync with Git
```

🆚 Comparison: This Approach vs. Fabric Deployment Pipelines

| Capability | Fabric Deployment Pipelines | This Solution (fabric-cicd + GitHub Actions) |
| --- | --- | --- |
| Source of truth | Workspace | ✅ Git |
| Promotion trigger | UI click / API call | ✅ Git push + approval |
| Approval gates | Fabric-native | ✅ GitHub Environments (same as app teams) |
| Audit trail | Fabric activity log | ✅ Git commits + GitHub Actions history |
| Rollback | Manual | ✅ git revert + auto-redeploy |
| Cross-platform governance | Separate model | ✅ Unified with infra/app CI/CD |
| Parameterization | Deployment rules | ✅ parameter.yml (reviewable in PR) |
| Secret management | Fabric-managed | ✅ GitHub Secrets + per-env SP isolation |
| Drift detection | Limited | ✅ Orphan cleanup (CLEAN_ORPHANS=true) |

🚀 Getting Started

Prerequisites

- 3 Fabric workspaces (DEV, QA, PROD)
- Service principal(s) with Contributor role on each workspace
- GitHub repository with Actions enabled
- GitHub Environments configured (dev, qa, prod)

Quick Setup

```shell
# 1. Clone the repo
git clone https://github.com/<your-org>/fabric-cicd-project.git

# 2. Install dependencies
pip install -r requirements.txt

# 3. Copy and fill environment variables
cp .env.example .env

# 4. Run locally against DEV
python deploy/deploy_workspace.py
```

GitHub Actions Setup

1. Create GitHub Environments: dev, qa (add reviewers), prod (add reviewers)
2. Add secrets to each environment:
   - DEV_TENANT_ID, DEV_CLIENT_ID, DEV_CLIENT_SECRET
   - QA_TENANT_ID, QA_CLIENT_ID, QA_CLIENT_SECRET
   - PROD_TENANT_ID, PROD_CLIENT_ID, PROD_CLIENT_SECRET
   - DEV_WORKSPACE_ID, QA_WORKSPACE_ID, PROD_WORKSPACE_ID
3. Push to main — the pipeline takes over!
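As a rough illustration of how the per-environment secrets above might be resolved at runtime (with the fallback to shared FABRIC_* variables mentioned earlier), consider this sketch. The function name and lookup order are assumptions about how such a script could work, not the actual fabric-cicd code.

```python
import os

def resolve_credentials(environment, env=os.environ):
    """Resolve tenant/client credentials for an environment (DEV/QA/PROD),
    falling back to shared FABRIC_* variables when no per-env value is set."""
    prefix = environment.upper()
    creds = {}
    for key in ("TENANT_ID", "CLIENT_ID", "CLIENT_SECRET"):
        # Prefer the environment-specific variable, e.g. QA_CLIENT_ID.
        value = env.get(f"{prefix}_{key}") or env.get(f"FABRIC_{key}")
        if value is None:
            raise KeyError(f"No {key} configured for {environment}")
        creds[key] = value
    return creds

# Example with an in-memory environment: QA has its own SP, DEV falls back to shared.
fake_env = {
    "QA_TENANT_ID": "qa-tenant", "QA_CLIENT_ID": "qa-app", "QA_CLIENT_SECRET": "qa-secret",
    "FABRIC_TENANT_ID": "shared-tenant", "FABRIC_CLIENT_ID": "shared-app",
    "FABRIC_CLIENT_SECRET": "shared-secret",
}
print(resolve_credentials("qa", fake_env)["CLIENT_ID"])    # qa-app
print(resolve_credentials("dev", fake_env)["CLIENT_ID"])   # shared-app
```

Failing fast with a clear error when a secret is missing keeps misconfiguration visible at the start of a run rather than mid-deployment.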
💡 Lessons Learned

After implementing this pattern across several engagements, here are the key takeaways:

✅ What Works Well

- Teams love the Git traceability once they experience a clean rollback
- Approval gates in GitHub feel natural to platform engineers
- parameter.yml changes in PRs create great review conversations about environment differences
- Job summaries give reviewers confidence to approve without digging into logs

⚠️ Watch Out For

- Cultural resistance is the #1 blocker — invest in enablement, not just automation
- Fabric items with runtime state (data in lakehouses, refresh history) aren't captured in Git
- Secret rotation across 3+ environments needs process discipline (consider OIDC federated credentials)
- Run a "portal vs. pipeline" side-by-side demo early — it changes minds fast

🤝 For CSAs: Sharing This With Customers

This solution is ideal for customers who:

- ☑️ Already use GitHub Actions for application or infrastructure CI/CD
- ☑️ Have governance requirements that demand Git-based audit trails
- ☑️ Operate multiple Fabric workspaces across environments
- ☑️ Want to standardize their promotion model across all workloads
- ☑️ Are moving from Power BI Premium to Fabric and want to modernize their DevOps practices

🗣️ Conversation Starters

- "How are you promoting Fabric items between environments today?"
- "Is your data team using the same CI/CD patterns as your app teams?"
- "If something goes wrong in production, how quickly can you roll back to the previous version?"

📚 Resources

- 📦 fabric-cicd on PyPI
- 📖 fabric-cicd Documentation
- 🐙 GitHub Actions Documentation
- 🏗️ Microsoft Fabric Git Integration
- 🌐 Git Repository: vinod-soni-microsoft/FABRIC-CICD-PROJECT: Enterprise-grade CI/CD solution for Microsoft Fabric using the fabric-cicd Python library and GitHub Actions. Git-driven deployments across DEV → QA → PROD with environment approval gates, per-environment service principal isolation, and parameterized promotion — no Fabric Deployment Pipelines required.
🏁 Conclusion

The shift from UI-driven promotion to Git-driven CI/CD for Microsoft Fabric isn't just a technical upgrade — it's a governance and cultural alignment decision. By using fabric-cicd with GitHub Actions, you get:

- 📖 One source of truth (Git)
- 🔄 One promotion model (GitHub Actions)
- ✅ One approval process (GitHub Environments)
- 🔍 One audit trail (Git history + Actions logs)
- 🔐 One security model (GitHub Secrets + per-env SPs)

No parallel governance. No hidden drift. No "who clicked what in the portal." Just Git, code, and confidence. 💪

Have questions or want to share your experience? Drop a comment below — I'd love to hear how your team is approaching Fabric CI/CD! 👇

M365 Roadmap Management
Wondering if others have tips & tricks on how they stay up to date with the https://www.microsoft.com/en-us/microsoft-365/roadmap? I find it tedious to stay on top of all the features being announced, switching launch phases, moving target dates, and so on. I was hoping to use the RSS feed in a Power BI dashboard, or to find a solution in Planner similar to syncing the Admin Centre messages, but after some quick searches online I could not find an example of anyone doing something similar. Could any members of this community shed some light on how they approach the roadmap site and what tools, if any, they use to manage the constant influx of information? TIA!

Data-driven Analytics for Responsible Business Solutions, a Power BI introduction course
Want to gain insight into how students at Radboud University are introduced, in a practical manner, to Power BI? Check out our learning process and final project. For a summary of our final solution, watch our Video Blog and stick around till the end for some "wise words"!

Seeking On-Premise Power BI P1 Capacity (Academic) or Suitable Alternative for Government Customer
Hello everyone,

We are working with a government agency customer who requires Power BI P1 Capacity (On-Premise) with Academic classification. Unfortunately, we’ve been informed by Microsoft that this product has been discontinued and is now only available through Azure. However, due to strict data security policies, our customer is unable to use Azure, as they cannot send data to external servers. We are looking for guidance on the following:

- Is there a direct successor to the Power BI P1 Capacity (On-Premise) with Academic classification that would meet these requirements?
- Are there any similar on-premise solutions for Power BI that could work for this customer?

Any suggestions on how we can provide a suitable solution would be greatly appreciated! Thanks in advance for your assistance!

How to use Semantic Kernel Bot in-a-box to interact with data using natural language & AI
We are thrilled to discuss two new features for Semantic Kernel's AI-powered assistant: SQLPlugin and UploadPlugin. SQLPlugin uses SQL to extract insights that can transform the way professionals interact with data, while UploadPlugin lets users upload documents and retrieve knowledge.

Part 4 of Microsoft Fabric Webcast Series: Panel Discussion and AI Use Case for Fabric
Join Microsoft and RSM as they take you through the fundamentals, value, benefits, and licensing considerations as we move into the world of Microsoft Fabric. These sessions are geared towards IT leaders, support staff, and data and reporting enthusiasts. Join us in our last panel session, where you can interact with experts from Microsoft and RSM.

- Part 1: Microsoft Fabric: Introduction and Overview - Feb 8, 12 EST. Intro to Fabric: what is Fabric, and what does it offer? Explains the difference from other PaaS offerings for Data and AI solutions.
- Part 2: Empowering D365 Customers with Fabric - Feb 13, 12 EST. How Fabric is useful for all Dynamics 365 customers: F&O, CE, etc.
- Part 3: Upstream / Downstream Impacts of Power BI and Fabric - Feb 22, 12 EST. Power BI and Fabric: how the licensing works, what’s changed vs. unchanged, benefits of the Direct Lake approach, etc.
- Part 4: Panel Discussion and AI Use Case for Fabric - Mar 14, 12 EST. AI-specific use case, a walk-through of a solution in Fabric, and a panel discussion.

Register for this event here. Note: you must register at this link to receive information on joining the session.

Revolutionizing Road Safety: Power Platform AI-Powered Vehicle Identification & Verification System
Explore our groundbreaking project leveraging AI Builder and Microsoft technologies for vehicle and driver identification. Enhance road safety, prevent theft, and streamline verification processes with our innovative system. Join us in exploring the potential of AI in traffic management.

Discover the Future of Data Engineering with Microsoft Fabric for Technical Students & Entrepreneurs
Microsoft Fabric is an all-in-one analytics solution for enterprises that covers everything from data movement to data science, Real-Time Analytics, and business intelligence. It offers a comprehensive suite of services, including data lake, data engineering, and data integration, all in one place. This makes it an ideal platform for technical students and entrepreneurial developers looking to streamline their data engineering and analytics workflows.
