Drasi is Fluent in GQL: Integrating the New Graph Query Standard
Drasi, the open-source Rust data change processing platform, simplifies the creation of change-driven systems through continuous queries, reactions, and clearly defined change semantics. Continuous queries enable developers to specify precisely what data changes matter, track those changes in real time, and react immediately as changes occur. Unlike traditional database queries, which provide static snapshots of data, continuous queries constantly maintain an up-to-date view of query results, automatically notifying reactions of precise additions, updates, and deletions to the result set as they happen. To date, Drasi has supported only openCypher, a powerful declarative graph query language, for writing continuous queries. Recently, Drasi added support for Graph Query Language (GQL), the new international ISO standard for querying property graphs. In this article, we describe what GQL means for writing continuous queries and how we implemented GQL support.

A Standardized Future for Graph Queries

GQL is the first officially standardized database language since SQL in 1987. Published by ISO/IEC in April 2024, it defines a global specification for querying property graphs. Unlike the relational model, which structures data into tables, the property graph model structures data inside the database as a graph. With GQL support, Drasi enables users to benefit from a query language that we expect to be widely adopted across the database industry, ensuring compatibility with future standards in graph querying. Drasi continues to support openCypher, allowing users to select the query language that best fits their requirements and existing knowledge. With the introduction of GQL, Drasi users can now write continuous queries using the new international standard.

Example GQL Continuous Query: Counting Unique Messages

Event-driven architectures traditionally involve overhead for parsing event payloads, filtering irrelevant data, and managing contextual state to identify precise data transitions. Drasi eliminates much of this complexity through continuous queries, which maintain accurate real-time views of data and generate change notifications.

Imagine a simple database with a message table containing the text of each message. Suppose you want to know, in real time, how many times the same message has been sent. Traditionally, addressing these types of scenarios involves polling databases at set intervals, using middleware to detect state changes, and developing custom logic to handle reactions. It could also mean setting up change data capture (CDC) to feed a message broker and processing events through a stream processing system. These methods can quickly become complex and difficult to maintain, especially when handling numerous or more sophisticated scenarios.

Drasi simplifies this process by employing a change-driven architecture. Rather than relying on polling or other indirect methods, Drasi uses continuous queries that actively monitor data for specific conditions. The moment a specified condition is met or changes, Drasi proactively sends notifications, ensuring real-time responsiveness. The following continuous query in GQL counts the frequency of each unique message:

MATCH (m:Message)
LET Message = m.Message
RETURN Message, count(Message) AS Frequency

You can explore this example in the Drasi Getting Started tutorial.
Key Features of the GQL Language

openCypher had a significant influence on GQL, and the two languages have much in common; however, there are also some important differences.

A new statement introduced in GQL is NEXT, which enables linear composition of multiple statements. It forms a pipeline where each subsequent statement receives the working table resulting from the previous statement. One application for NEXT is the ability to filter results after an aggregation. For example, to find colors associated with more than five vehicles, the following query can be used:

MATCH (v:Vehicle)
RETURN v.color AS color, count(v) AS vehicle_count
NEXT
FILTER vehicle_count > 5
RETURN color, vehicle_count

Equivalent openCypher:

MATCH (v:Vehicle)
WITH v.color AS color, count(v) AS vehicle_count
WHERE vehicle_count > 5
RETURN color, vehicle_count

GQL introduces additional clauses and statements: LET, YIELD, and FILTER.

The LET statement allows users to define new variables or computed fields for every row in the current working table. Each LET expression can reference existing columns in scope, and the resulting variables are added as new columns. Example:

MATCH (v:Vehicle)
LET makeAndModel = v.make + ' ' + v.model
RETURN makeAndModel, v.year

Equivalent openCypher:

MATCH (v:Vehicle)
WITH v, v.make + ' ' + v.model AS makeAndModel
RETURN makeAndModel, v.year

The YIELD clause projects and optionally renames columns from the working table, limiting the set of columns available in scope. Only specified columns remain in scope after YIELD. Example:

MATCH (v:Vehicle)-[e:LOCATED_IN]->(z:Zone)
YIELD v.color AS vehicleColor, z.type AS location
RETURN vehicleColor, location

FILTER is a standalone statement that removes rows from the current working table based on a specified condition. While GQL still supports a WHERE clause for filtering during the MATCH phase, the FILTER statement provides additional flexibility by allowing results to be filtered after previous steps. It does not create a new table; instead, it updates the working table. Unlike openCypher's WHERE clause, which is tied to a MATCH or WITH, GQL's FILTER can be applied independently at various points in the query pipeline. Example:

MATCH (n:Person)
FILTER n.age > 30
RETURN n.name, n.age

GQL also provides control over how aggregations are grouped. The GROUP BY clause can be used to explicitly define the grouping keys, ensuring results are aggregated exactly as intended.

MATCH (v:Vehicle)-[:LOCATED_IN]->(z:Zone)
RETURN z.type AS zone_type, v.color AS vehicle_color, count(v) AS vehicle_count
GROUP BY zone_type, vehicle_color

If the GROUP BY clause is omitted, GQL defaults to implicit grouping behavior, in which all non-aggregated columns in the RETURN clause are automatically used as the grouping keys.

While many of the core concepts, like pattern matching, projections, and filtering, will feel familiar to openCypher users, GQL's statements are distinct in their usage. Supporting these differences in Drasi required design changes, described in the following section, that led to support for multiple query languages within the platform.

Refactoring Drasi for Multi-Language Query Support

Instead of migrating Drasi from openCypher to GQL, we saw this as an opportunity to address multi-language support in the system. Drasi's initial architecture was designed exclusively for openCypher. In this model, the query parser generated an Abstract Syntax Tree (AST) for openCypher.
The execution engine was designed to process this AST format, executing the query it represented to produce the resulting dataset. Built-in functions (such as toUpper() for string case conversion) followed openCypher naming and were implemented within the same module as the engine. This created an architectural challenge for supporting additional query languages, such as GQL.

To enable multi-language support, the system was refactored to separate parsing, execution, and function management. A key insight was that the existing AST structure, originally created for openCypher, was flexible enough to be used for GQL. Although GQL and openCypher are different languages, their core operations (matching patterns, filtering data, and projecting results) could be represented by this AST.

The dependencies within the new architecture highlight the separation and interaction between the components: the language-specific function modules for openCypher and GQL provide the functions to the execution engine, the language-specific parsers for openCypher and GQL produce an AST, and the execution engine operates on this AST. The engine only needs to understand this AST format, making it language-agnostic.

The AST structure is based on a sequence of QueryPart objects. Each QueryPart represents a distinct stage of the query, containing clauses for matching, filtering, and returning data. The execution engine processes these QueryParts sequentially.

pub struct QueryPart {
    pub match_clauses: Vec<MatchClause>,
    pub where_clauses: Vec<Expression>,
    pub return_clause: ProjectionClause,
}

The process begins when a query is submitted in either GQL or openCypher. The query is first directed to its corresponding language-specific parser, which handles the lexical analysis and transforms the raw query string into the standardized AST. When data changes occur in the graph, the execution engine uses the MATCH clauses from the first QueryPart to find affected graph patterns and captures the matched data. This matched data then flows through each QueryPart in sequence. The WHERE portion of the AST filters out data that does not meet the specified conditions. The RETURN portion transforms the data by selecting specific fields, computing new values, or performing aggregations. Each QueryPart's output becomes the next one's input, creating a pipeline that incrementally produces query results as the underlying graph changes.

To support functions from multiple languages in this AST, we introduced a function registry to abstract a function's name from its implementation. Function names can differ between languages (e.g., toUpper() in openCypher versus Upper() in GQL). For any given query, language-specific modules populate this registry, mapping each function name to its corresponding behavior. Functions with shared logic can be implemented once in the engine and registered under multiple names in the language-specific function crates, preventing code duplication. Meanwhile, language-exclusive functions can be registered and implemented separately within their respective modules. When processing an AST, the engine uses the registry attached to that query to resolve and execute the correct function. The separate function modules also allow developers to introduce their own function registry, supporting custom implementations or names.
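As a rough illustration of the registry idea, here is a minimal Rust sketch with hypothetical type and function names (not Drasi's actual API): each language module registers its own spelling of a function against a shared implementation, and the engine resolves names through the registry when it evaluates the AST.

use std::collections::HashMap;

// Hypothetical stand-ins for the engine's real value and function types.
type Value = String;
type ScalarFn = fn(&[Value]) -> Value;

// Shared implementation used by both languages.
fn to_upper(args: &[Value]) -> Value {
    args.first().map(|s| s.to_uppercase()).unwrap_or_default()
}

#[derive(Default)]
struct FunctionRegistry {
    functions: HashMap<String, ScalarFn>,
}

impl FunctionRegistry {
    fn register(&mut self, name: &str, f: ScalarFn) {
        self.functions.insert(name.to_lowercase(), f);
    }

    fn resolve(&self, name: &str) -> Option<&ScalarFn> {
        self.functions.get(&name.to_lowercase())
    }
}

fn main() {
    let mut registry = FunctionRegistry::default();
    // Each language module maps its own name onto the shared implementation.
    registry.register("toUpper", to_upper); // openCypher spelling
    registry.register("Upper", to_upper);   // GQL spelling

    let args = vec!["drasi".to_string()];
    let upper = *registry.resolve("Upper").expect("function not registered");
    println!("{}", upper(&args)); // prints "DRASI"
}

Registering both spellings against the same function keeps shared logic in one place while letting each language keep its own naming; a language-exclusive function would simply be registered by only one module.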
Conclusion

By adding support for GQL, Drasi now offers developers a choice between openCypher and the new GQL standard. This capability ensures that teams can use the syntax that best fits their skills and project requirements. In addition, the architectural changes set the foundation for additional query languages. You can check out the code on our GitHub organization, dig into the technical details on our documentation site, and join our developer community on Discord.

Mysterious Nightly CPU Spikes on App Service Plans (22:00-10:00) Despite Low Traffic
For several months now, all of our Azure App Service Plans have been experiencing consistent CPU spikes during off-peak hours, specifically from approximately 22:00 to 10:00. This pattern is particularly puzzling because:

- This timeframe corresponds to our lowest traffic and activity periods
- We've conducted thorough investigations but haven't identified the root cause
- No scheduled timer functions or planned jobs are running during these hours that could explain the spikes

What we've already checked:

- Application logs and metrics
- Scheduled functions and background jobs
- Traffic patterns and user activity

Has anyone encountered similar behavior? What could be causing these nightly CPU spikes on otherwise idle App Service Plans?

👉 Securing Azure Workloads: From Identity to Monitoring
Hi everyone 👋 — following up on my journey, I want to share how I approach end-to-end security in Azure workloads.

- Identity First – Microsoft Entra ID for Conditional Access, PIM, and risk-based policies.
- Workload Security – Defender for Cloud to monitor compliance and surface misconfigurations.
- Visibility & Monitoring – Log Analytics + Sentinel to bring everything under one pane of glass.

Through my projects, I've been simulating enterprise scenarios where security isn't just a checklist — it's integrated into the architecture.

Coming soon:

- A lab demo showing how Defender for Cloud highlights insecure configurations.
- A real-world style Conditional Access baseline for Azure workloads.

Excited to hear how others in this community are securing their Azure environments!

#Azure | #AzureSecurity | #MicrosoftLearn | #ZeroTrust | #PerparimLabs

Expose AVD registration status on Azure VM objects
In enterprise environments, it's difficult to determine whether a VM is successfully registered with Azure Virtual Desktop (AVD) without querying the host pool or relying on indirect signals. Please consider surfacing the AVD registration status (e.g., Registered, Not Registered, Pending) directly on the Azure VM object, accessible via:

- Azure Portal
- Azure Resource Graph
- Azure PowerShell / CLI
- REST API

This would simplify automation, monitoring, and remediation workflows across large-scale deployments. Thanks for considering this!

Vu

Built a Real-Time Azure AI + AKS + DevOps Project – Looking for Feedback
Hi everyone, I recently completed a real-time project using Microsoft Azure services to build a cloud-native healthcare monitoring system. The key services used include:

- Azure AI (Cognitive Services, OpenAI)
- Azure Kubernetes Service (AKS)
- Azure DevOps and GitHub Actions
- Azure Monitor, Key Vault, API Management, and others

The project focuses on real-time health risk prediction using simulated sensor data. It's built with containerized microservices, infrastructure as code, and end-to-end automation.

GitHub link (with source code and documentation): https://github.com/kavin3021/AI-Driven-Predictive-Healthcare-Ecosystem

I would really appreciate your feedback or suggestions to improve the solution. Thank you!

Scaling Smart with Azure: Architecture That Works
Hi Tech Community! I'm Zainab, currently based in Abu Dhabi and serving as Vice President of Finance & HR at Hoddz Trends LLC, a global tech solutions company headquartered in Arkansas, USA. While I lead on strategy, people, and financials, I also roll up my sleeves when it comes to tech innovation.

In this discussion, I want to explore the real-world challenges of scaling systems with Microsoft Azure. From choosing the right architecture to optimizing performance and cost, I'll be sharing insights drawn from experience, and I'd love to hear yours too. Whether you're building from scratch, migrating legacy systems, or refining deployments, let's talk about what actually works.

Responding to the Absence of Change in Change-Driven Systems
Drasi, an open-source Data Change Processing Platform, simplifies the creation of change-driven systems because it provides a consistent way of thinking about, detecting, and reacting to change. Sometimes, however, you need to detect and react when data doesn't change. Drasi provides an approach to detecting the absence of change and makes building such systems easy.

When there is no change

In the world of change-driven systems, certain scenarios challenge conventional response mechanisms. Among these challenges is the subtle yet complex problem of responding to the absence of change rather than the arrival of an individual event. This nuanced requirement often arises in monitoring systems, IoT devices, and other applications where a condition must persist for a given duration to warrant a reaction.

Consider an example: a freezer's temperature sensor emits an event when the temperature changes, and at one point, the temperature registers above 32°F. While this measurement is significant, the system should only react if the freezer's temperature remains above 32°F for at least 15 minutes. There is, however, no explicit event that confirms this persistence. The difficulty lies in establishing a reliable mechanism to track and respond to sustained states without direct event notification of their continuity. We'll describe polling and timers, the traditional solutions, and then describe how Drasi solves this problem.

Traditional solutions

Polling

Polling often serves as the standard approach to this problem. In this method, the system periodically scans the last 15 minutes of data to determine whether the temperature was above the threshold continuously for 15 minutes. This approach is inherently limited by its non-real-time nature, as the system only identifies qualifying conditions during scheduled intervals. Consequently, there may be delays in detecting and responding to critical conditions, especially in scenarios where timely action is paramount. Furthermore, polling can lead to increased computational overhead, especially in large-scale systems, as it requires frequent queries to ensure no conditions are missed.

Timers

An alternative to polling involves leveraging the initial event that triggers a state change to start a timer. In this approach, the system initiates a countdown the moment a condition arises, such as the temperature rising above 32°F. If the condition persists for the defined threshold (15 minutes for the freezer), the system initiates the required response. Conversely, if the condition is resolved before the timer expires, the timer is canceled.

While this approach addresses some limitations of polling by introducing real-time responsiveness, it brings its own complexities and overhead. Managing timers at scale is not trivial, particularly in distributed systems with thousands of tracked conditions. Each timer must be initiated, monitored, and terminated. To do this effectively, a specialized timer management service must be built or adopted. This service needs to manage the timers' lifecycle, ensure high reliability, and scale to high volumes. Ensuring failover and recovery mechanisms for timers, particularly in distributed systems, introduces further complexity. For example, if a node managing active timers fails, the system must ensure that no timer is lost or incorrectly reset, which often requires sophisticated state replication and recovery strategies.
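As a rough illustration of the bookkeeping this implies, here is a minimal single-process Rust sketch using tokio, with hypothetical names; it is not code from Drasi or from any particular timer service. Each tracked condition gets its own timer task, and a handle must be kept so the timer can be cancelled if the condition clears early.

use std::{collections::HashMap, time::Duration};
use tokio::{task::JoinHandle, time::sleep}; // requires the tokio crate ("rt", "macros", "time" features)

// One timer task per tracked condition; the JoinHandle must be kept so the
// timer can be cancelled if the condition clears before the deadline.
struct TimerManager {
    active: HashMap<String, JoinHandle<()>>,
}

impl TimerManager {
    // The condition (temp > 32) just became true: start the countdown.
    fn start(&mut self, freezer_id: &str, wait: Duration) {
        let id = freezer_id.to_string();
        let handle = tokio::spawn(async move {
            sleep(wait).await;
            // The condition held for the full duration: fire the reaction.
            println!("ALERT: {id} has been above 32F for {wait:?}");
        });
        self.active.insert(freezer_id.to_string(), handle);
    }

    // The condition cleared early: the timer must be found and aborted.
    fn cancel(&mut self, freezer_id: &str) {
        if let Some(handle) = self.active.remove(freezer_id) {
            handle.abort();
        }
    }
}

#[tokio::main]
async fn main() {
    let mut timers = TimerManager { active: HashMap::new() };
    timers.start("freezer-1", Duration::from_secs(3)); // stands in for 15 minutes
    sleep(Duration::from_secs(1)).await;
    timers.cancel("freezer-1"); // temperature dropped back below 32F in time
    sleep(Duration::from_secs(3)).await; // no alert fires
}

Even in this toy form, the handles, the cancellation path, and the question of what happens to them when the process restarts are exactly the kind of state a production timer service has to manage and replicate.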
Ultimately, this timer-based approach necessitates the deployment and management of custom-built services. These services bring inherent costs not only in terms of development and maintenance but also in operational overhead. As such, while this method can deliver superior responsiveness compared to polling, its implementation comes with a steep tradeoff in system complexity and cost.

Drasi to detect the absence of change

Central to Drasi is the Continuous Query Pattern, implemented using the openCypher graph query language. A Continuous Query runs perpetually, fed by change logs from one or more data sources, maintaining the current query result set and generating notifications when those results change. Unlike producer-defined event streams, this pattern empowers consumers to specify the relevant properties and their relationships using a familiar database-style query. Drasi solves the "absence of change" problem through a suite of "future" functions within a Continuous Query.

Verifying Sustained Conditions with Drasi: A Freezer Monitoring Example

The freezer example can be expressed as a simple openCypher query using the "trueFor" function unique to Drasi. The "trueFor" function takes an expression that must evaluate to true for the specified duration. Only if the expression holds true for the entire duration does the WHERE clause resolve to true, and only then is a notification emitted that a new item has been added to the result set.

MATCH (f:Freezer)
WHERE drasi.trueFor(f.temp > 32, duration( { minutes: 15 } ))
RETURN f.id AS id, f.temp AS temp

Under the hood

To achieve this, Drasi internally leverages a specialized priority queue with unique access patterns, ordered by future timestamps. When the WHERE clause is first evaluated, some metadata about the associated graph elements is pushed into the priority queue; this metadata can later be used to surgically re-evaluate a given condition using cached indexes. The position in the queue is determined by the future timestamp at which the condition can be re-evaluated. The "trueFor" function takes a condition and a duration specifying how long the condition needs to remain true. The function only returns true when the condition has held continuously for the specified duration.

Let's consider the freezer example with the following temperature changes:

- At 12:00 - The freezer temp is 35
- At 12:01 - The freezer temp is 36
- At 12:02 - The freezer temp is 30
- At 12:14 - The freezer temp is 34

Given the value of 30 at 12:02 and the value of 34 at 12:14, the alert should not fire until 12:29. To achieve this, the time at which the freezer crosses 32 degrees needs to be tracked so that it can be determined whether the condition has been true for at least 15 minutes. When the query engine first evaluates this function, it tests the "temp > 32" expression passed to it. If the condition resolves true, the element metadata is added to the queue, but only if it is not already on the queue. If the condition resolves false and that metadata is already on the queue, it is removed from the queue, because continuity has been broken. If the metadata reaches the head of the queue and its timestamp elapses, the element is reprocessed through the query, and the function returns a true result, which triggers a reaction.
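Before walking through the concrete queue states for this timeline, here is a rough Rust sketch of that queue discipline, using standard-library types and hypothetical names (FutureQueue, Entry, timestamps as minutes since midnight) rather than Drasi's internals: entries are ordered by their future re-evaluation time, an entry is invalidated if its condition breaks before that time, and due entries are popped once their timestamp elapses.

use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashMap};

// Hypothetical element metadata: an element id plus the time (minutes since
// midnight) at which the trueFor condition may be re-evaluated.
#[derive(Clone, PartialEq, Eq, PartialOrd, Ord)]
struct Entry {
    due_at: u32,
    element_id: String,
}

#[derive(Default)]
struct FutureQueue {
    heap: BinaryHeap<Reverse<Entry>>,  // min-heap: earliest due time first
    scheduled: HashMap<String, u32>,   // element id -> currently valid due time
}

impl FutureQueue {
    // Condition became true: schedule a re-evaluation, unless one is already pending.
    fn schedule(&mut self, element_id: &str, due_at: u32) {
        if !self.scheduled.contains_key(element_id) {
            self.scheduled.insert(element_id.to_string(), due_at);
            self.heap.push(Reverse(Entry { due_at, element_id: element_id.to_string() }));
        }
    }

    // Condition broke before the duration elapsed: continuity is lost, so the
    // pending entry is invalidated (stale heap entries are skipped on pop).
    fn cancel(&mut self, element_id: &str) {
        self.scheduled.remove(element_id);
    }

    // Return elements whose due time has elapsed and were never cancelled.
    fn pop_due(&mut self, now: u32) -> Vec<String> {
        let mut due = Vec::new();
        while let Some(Reverse(entry)) = self.heap.peek().cloned() {
            if entry.due_at > now {
                break;
            }
            self.heap.pop();
            if self.scheduled.get(&entry.element_id) == Some(&entry.due_at) {
                self.scheduled.remove(&entry.element_id);
                due.push(entry.element_id);
            }
        }
        due
    }
}

fn main() {
    let mut q = FutureQueue::default();
    q.schedule("f1", 720 + 15); // 12:00, temp 35 -> re-evaluate at 12:15
    q.schedule("f1", 721 + 15); // 12:01, temp 36 -> already queued, nothing changes
    q.cancel("f1");             // 12:02, temp 30 -> continuity broken
    q.schedule("f1", 734 + 15); // 12:14, temp 34 -> re-evaluate at 12:29
    assert!(q.pop_due(735).is_empty());                  // 12:15: stale entry ignored
    assert_eq!(q.pop_due(749), vec!["f1".to_string()]);  // 12:29: alert fires
    println!("ok");
}

With the temperature changes listed above, this corresponds to a schedule at 12:00, a cancellation at 12:02, and a re-schedule at 12:14 whose 12:29 deadline is the first to survive long enough to fire.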
The priority queue would look as follows after each change (where "f1" represents the metadata for "Freezer 1"):

- At 12:00 (temp 35): f1 is added to the queue with a re-evaluation time of 12:15
- At 12:01 (temp 36): f1 is already on the queue, so nothing changes
- At 12:02 (temp 30): continuity is broken, so f1 is removed from the queue
- At 12:14 (temp 34): f1 is added to the queue with a re-evaluation time of 12:29

Future-Time Evaluation with Drasi: A Payment Authorization Example

The continuity feature of the "trueFor" function may not be desired in every use case. Take another example: an online payment system, where a payment is first authorized and the customer's funds are put on hold to secure an order. If the order is not completed within fifteen minutes, the funds must be released, and the reserved inventory must be made available again. This example can also be expressed as a simple openCypher query, using the "trueLater" function. This function takes an expression that must evaluate to true at a given future time. If it evaluates to true at that future time, the WHERE clause will resolve to true, and only then will a notification be emitted that a new item has been added to the result set.

MATCH (p:Payment)
WHERE drasi.trueLater(p.status = 'auth', p.expires_at)
RETURN p.id, p.amount, p.customer

Under the hood

When the WHERE clause is first evaluated, if the timestamp provided to the function is in the future, the function pushes the element metadata to the priority queue and returns an "AWAITING" result, which is the equivalent of false; in the payment example, the WHERE clause filters out this potential result. If the provided timestamp is in the past, the function returns the result of evaluating the condition.

Try out the "Absence of Change" tutorial to see these functions in action.

Conclusion

Detecting the absence of change in change-driven systems is a subtle yet critical challenge, often complicated by the inefficiencies of traditional approaches like polling or the complexities of managing timers at scale. Drasi revolutionizes this process with the Continuous Query Pattern and powerful functions like "trueFor" and "trueLater", enabling developers to build responsive, scalable systems with ease. By leveraging familiar openCypher queries, Drasi eliminates the need for cumbersome custom services, delivering real-time reactions with minimal overhead. Drasi offers a streamlined, elegant solution. Ready to simplify your change-driven systems? Explore Drasi today, experiment with its Continuous Queries, and join the conversation to share your insights!

Further reading: Reference | Drasi Docs

Join the Drasi community

If you're a developer interested in solving real-world problems, exploring modern architectures, or just looking to contribute to something meaningful, we'd love to have you onboard. You can check out the code on our GitHub organization, dig into the technical details on our documentation site, and join our developer community on Discord.

Comparison of Azure Cloud Sync and Traditional Entra Connect Sync
Introduction

In the evolving landscape of identity management, organizations face a critical decision when integrating their on-premises Active Directory (AD) with Microsoft Entra ID (formerly Azure AD). Two primary tools are available for this synchronization:

- Traditional Entra Connect Sync (formerly Azure AD Connect)
- Azure Cloud Sync

While both serve the same fundamental purpose, bridging on-prem AD with cloud identity, they differ significantly in architecture, capabilities, and ideal use cases.

Architecture & Setup

Entra Connect Sync is a heavyweight solution. It installs a full synchronization engine on a Windows Server, often backed by SQL Server. This setup gives administrators deep control over sync rules, attribute flows, and filtering.

Azure Cloud Sync, on the other hand, is lightweight. It uses a cloud-managed agent installed on-premises, removing the need for SQL Server or complex infrastructure. The agent communicates with Microsoft Entra ID, and most configurations are handled in the cloud portal.

For organizations with complex hybrid setups (e.g., Exchange hybrid, device management), is Cloud Sync too limited?