Oracle Database@Azure is an Oracle database service running on Oracle Cloud Infrastructure (OCI), colocated in Microsoft data centers. This colocation gives the Oracle Database@Azure service the fastest possible access to Azure resources and applications. The solution is intended to support the migration of Oracle database workloads to Azure, where customers can integrate and innovate with the breadth of Microsoft Cloud services. For more information, see Overview - Oracle Database@Azure | Microsoft Learn.
The current Oracle Database@Azure service has a network limitation: it cannot respond to network connections originating outside of its Azure virtual network (VNet) when the traffic is expected to route through a firewall. This limitation constrains integration with Azure services that are not located within the same VNet. It also affects network communication from on-premises environments that need to connect to the Oracle Database@Azure service.
To address this network limitation, the recommended solution is to deploy a Network Virtual Appliance (NVA) within the Oracle Database@Azure VNet. While Microsoft and Oracle are working together on an update to the Azure platform that will eliminate this limitation, customers need to follow this design pattern until the update officially rolls out.
The NVA consists of a Linux virtual machine (VM); any supported distribution on Azure can be used. The NVA referenced in this article is not a traditional firewall: it is a VM acting as a router with IP forwarding enabled, and it is not intended to be an enterprise-scale firewall NVA. This solution is only meant to help customers bridge the gap until the jointly engineered design pattern is available in all Azure regions.
The deployment of the NVA helps solve the specific scenarios outlined below:
Additional details on supported network topologies can be found in Network planning for Oracle Database@Azure | Microsoft Learn.
This article reviews a network scenario within an Azure landing zone that requires an NVA. The deployment steps for the NVA and the other ancillary steps required to complete the end-to-end implementation are included. This article does not cover hybrid connectivity from on-premises to Azure; that scenario will be covered in a later article. However, both share the same method of using user-defined routes (UDRs).
The Azure landing zone consists of a hub-and-spoke architecture where the application layer is hosted in a VNet dedicated to the application front-end services, such as web servers. Oracle Database@Azure is deployed in a separate VNet dedicated to data. The goal is to provide bidirectional network connectivity between the application layer and the data layer.
The steps provided in this article should be followed in the designated order to ensure the expected results. Please consult with either your Microsoft or Oracle representative if you have specific questions related to your environment.
Note: At the time this article was published, Azure Firewall is not supported in this scenario. Native support for third-party NVAs is scheduled for 2024, but is subject to change. Third-party NVAs require this workaround to support network communication in the above-mentioned scenario until these features are fully implemented on Azure.
Create a Linux VM in Azure as an NVA
Set up a Linux VM (using any supported distribution on Azure) in the desired resource group and in the same region as Oracle Database@Azure, using your deployment method of choice (for example, the Azure portal, Azure PowerShell, or the Azure CLI). As a security recommendation, use Secure Shell (SSH) public/private keys to ensure secure communication.
Ensure the VM is in the same VNet as Oracle Database@Azure, but on a separate subnet from both the Oracle Database@Azure delegated subnet and the dedicated Oracle backup subnet, if one has been deployed.
Note: Sizing is driven by the actual traffic pattern. Consider how much traffic volume, in packets per second, the implementation must support. A 2-core general-purpose VM with 8 GiB (gibibytes) of memory and accelerated networking enabled (for example, D2s_v5 with 2 vCPUs) is a reasonable starting point for gauging initial performance. High storage/IOPS performance SKUs are not necessary for this use case.
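As one possible starting point, the following Azure CLI sketch creates such a VM. The resource group, VNet, subnet, and user names are illustrative placeholders that you would replace with the values for your environment.

```shell
# Create the NVA VM: a 2-vCPU general-purpose size with accelerated networking
# and SSH key authentication. All names below are illustrative placeholders.
az vm create \
  --resource-group rg-odaa-network \
  --name vm-nva \
  --image Ubuntu2204 \
  --size Standard_D2s_v5 \
  --accelerated-networking true \
  --vnet-name vnet-odaa \
  --subnet snet-nva \
  --admin-username azureuser \
  --generate-ssh-keys \
  --public-ip-address ""
```

Omitting the public IP address keeps the NVA reachable only from within the VNet, which fits its role as an internal router.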
As part of the deployment and monitoring strategy please consult Welcome | Azure Monitor Baseline Alerts for the proper Azure Monitor counters that should be enabled against the NVA to ensure performance and availability.
Enable IP Forwarding on the VM's NIC (Network Interface Cards)
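This can be done in the Azure portal on the NIC's settings, or with the Azure CLI as sketched below; the resource group and NIC names are placeholders.

```shell
# Enable Azure-level IP forwarding on the NVA's NIC so the Azure fabric
# delivers packets whose destination is not the NIC's own IP address.
az network nic update \
  --resource-group rg-odaa-network \
  --name vm-nvaVMNic \
  --ip-forwarding true
```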
Enable IP Forwarding at the Operating System level
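On most Linux distributions this amounts to setting the net.ipv4.ip_forward kernel parameter, both at runtime and persistently, for example:

```shell
# Enable IPv4 forwarding immediately (this alone is lost on reboot).
sudo sysctl -w net.ipv4.ip_forward=1

# Persist the setting across reboots via a sysctl drop-in file.
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf

# Verify the current value.
sysctl net.ipv4.ip_forward
```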
We now need to implement iptables rules to route traffic properly through the NVA. By default, Linux systems lose their iptables rules after a reboot; to avoid that, we will install some packages and make some configuration changes. The first example uses an Ubuntu or Debian Linux distribution. Only IPv4 is used for the changes on the Linux systems listed in this article.
Ubuntu / Debian Linux system
Ensure that the local firewall on the NVA is either enabled or set to not block traffic. If your image ships an iptables service, enable and start it by running sudo systemctl enable iptables followed by sudo systemctl start iptables. (Stock Ubuntu and Debian images do not include an iptables service; on those systems, rule persistence is handled by the netfilter-persistent service installed below.)
List the current iptables rules by running sudo iptables -L. This lists any existing firewall rules.
Note: If there are existing rules, flush them with sudo iptables -F.
We need to install the iptables-persistent package:
On an Ubuntu system, type sudo apt install iptables-persistent.
On a Debian system, type sudo apt-get install iptables-persistent.
Make sure the service is enabled on Debian or Ubuntu using the systemctl command:
sudo systemctl is-enabled netfilter-persistent.service
If it is not enabled, enable it with the following command:
sudo systemctl enable netfilter-persistent.service
Check the status of the service by running the following command:
sudo systemctl status netfilter-persistent.service
Enter the following commands line by line:
sudo iptables -t nat -A POSTROUTING -j MASQUERADE
sudo iptables -A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -j ACCEPT
Validate that the iptables rules are in place by typing sudo iptables -L.
Save the rules so they are reloaded after a reboot by running sudo netfilter-persistent save. The saved rules (stored in /etc/iptables/rules.v4) are applied automatically when the system boots.
The second example uses a Red Hat Enterprise Linux (RHEL), Fedora, or AlmaLinux system. The commands are similar across these Linux distributions.
RHEL/Fedora/AlmaLinux
Type the following commands line by line to disable firewalld:
sudo systemctl stop firewalld.service
sudo systemctl disable firewalld.service
sudo systemctl mask firewalld.service
Next, install the iptables-services package using either the native yum or dnf package management commands.
The following example uses yum. Type the following commands line by line:
sudo yum install iptables-services
sudo systemctl enable iptables
sudo systemctl enable ip6tables
sudo systemctl status iptables
If you use dnf, enter the following commands line by line:
sudo dnf install iptables-services
sudo systemctl enable iptables
sudo systemctl enable ip6tables
sudo systemctl status iptables
Once the service is installed, run the following commands line by line to apply the rules:
sudo iptables -t nat -A POSTROUTING -j MASQUERADE
sudo iptables -A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -j ACCEPT
Next, persist the rules so they survive a reboot by typing the following command:
sudo service iptables save
This writes the running rules to /etc/sysconfig/iptables. Note that this file uses the iptables-save format rather than full command lines; if you prefer to edit it directly with vi, vim, or nano, add the rules in that format.
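To confirm the configuration on RHEL-family systems, you can check both the running rules and the persisted file:

```shell
# Show the active rules with packet and byte counters.
sudo iptables -L -n -v

# Show the rules persisted for the next boot (iptables-save format).
sudo cat /etc/sysconfig/iptables
```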
Ensure that the network security group (NSG) on the NVA allows all traffic from the application VNet and the Oracle Database@Azure delegated subnet.
Configure Route Tables
Oracle Database@Azure VNet (Spoke)
Important: Ensure in the configuration of the route table that all route propagation is disabled. This setup ensures that all traffic to and from the Oracle Database is forced through your local NVA.
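As an illustration, the following Azure CLI sketch creates a route table with gateway (BGP) route propagation disabled, adds a default route pointing at the NVA, and associates the table with the delegated subnet. All names and IP addresses are placeholders for your environment.

```shell
# Create a route table with gateway route propagation disabled, so only
# the user-defined routes below take effect.
az network route-table create \
  --resource-group rg-odaa-network \
  --name rt-odaa-delegated \
  --disable-bgp-route-propagation true

# Send all outbound traffic from the delegated subnet through the NVA.
# 10.1.1.4 is a placeholder for the NVA's private IP address.
az network route-table route create \
  --resource-group rg-odaa-network \
  --route-table-name rt-odaa-delegated \
  --name to-nva \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.1.1.4

# Associate the route table with the Oracle Database@Azure delegated subnet.
az network vnet subnet update \
  --resource-group rg-odaa-network \
  --vnet-name vnet-odaa \
  --name snet-odaa-delegated \
  --route-table rt-odaa-delegated
```

The same pattern (create table, add a route with next hop type VirtualAppliance, associate with a subnet) applies to the other route tables described in this article.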
Configure Route Tables for the NVA in the Oracle Database@Azure VNet
Important: Ensure in the configuration of the route table that all route propagation is disabled. This setup ensures that all traffic to and from the Oracle Database is forced through your local NVA.
Route Configuration Application Tier
Route to Hub NVA
Important: Ensure in the configuration of the route table that all route propagation is disabled. This setup ensures that all traffic to and from the Oracle Database is forced through your local NVA.
Route Configuration Hub VNet
Route to Local NVA:
Important: Ensure in the configuration of the route table that all route propagation is disabled. This setup ensures that all traffic to and from the Oracle Database is forced through your local NVA.
When finished, the implemented network flow and environment should match the following diagram:
The next step is to start testing by initiating a connection from the application servers. Make sure the proper components have been installed on the application servers to connect to the Oracle Database@Azure before validating connectivity. Validate that the application servers can connect to the Oracle Database@Azure.
If you need to troubleshoot, deploy a test Linux VM in the application subnet to test connectivity, and install the mtr package on it. Do not rely on ping (ICMP) to troubleshoot, because ICMP will not properly test connectivity within Azure. An example mtr command is: sudo mtr -T -n -P 1521 10.2.0.4. This starts a TCP trace to the database without using ICMP, targeting port 1521, which the database listens on for connections. If a problem is identified, review the route tables and verify that the IP addresses were entered correctly. If the initial tests are successful, you have implemented this solution correctly.
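Installing and running mtr on the test VM might look like the following (the target IP 10.2.0.4 is the example database address used above):

```shell
# Install mtr: pick the line matching your distribution.
sudo apt install -y mtr        # Ubuntu/Debian
# sudo dnf install -y mtr      # RHEL/Fedora/AlmaLinux

# TCP trace to the database listener, with no DNS lookups and no ICMP probes:
#   -T        use TCP instead of ICMP
#   -n        numeric output (skip reverse DNS)
#   -P 1521   target the Oracle listener port
sudo mtr -T -n -P 1521 10.2.0.4
```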
For adoption guidance, please visit the Microsoft Cloud Adoption Framework (CAF): Introduction to Oracle on Azure adoption scenarios - Cloud Adoption Framework | Microsoft Learn.
Authors
Moises Gomez Cortez
Technical Editor and Content Contributor
Anthony de Lagarde, Erik Munson