CommuniGate Pro
Version 6.3
 

Real-Time Processing in Clusters

This section explains how Real-Time operations work in the CommuniGate Pro Cluster environment:

Real-Time Tasks

The CommuniGate Pro Real-Time Tasks communicate by sending events to task handlers. A task handler is a reference object describing a Real-Time Task, a Session, or a Real-Time Signal. In a CommuniGate Pro Cluster environment, the Task handler contains the address of the Cluster Member server the referenced Real-Time Task, Session, or Signal is running on.

When an event should be delivered to a different cluster member, it is delivered using the inter-Cluster CLI/API. The event recipient can reply, using the sender task handler, and again the inter-Cluster CLI/API will be used to deliver the reply event.

Real-Time Application Tasks usually employ Media channels. To be able to exchange media with external entities, Real-Time Tasks should run only on those Cluster members that have direct access to the Internet.

XIMSS Call Legs

When a XIMSS session initiates a call, it creates a Call Leg object. These Call Leg objects manage the XIMSS user Media channels; because they must be able to exchange media with external entities, they should run only on those Cluster members that have direct access to the Internet.

When a Real-Time Signal component directs an incoming call to a XIMSS session, it creates a Call Leg object on the same Cluster member that processes this incoming call Signal request. This Call Leg object is then "attached" to the XIMSS session (which runs on some backend Server, possibly a different Cluster member).

When a XIMSS session and its Call Leg run on different Cluster members, they communicate via special events delivered using the inter-Cluster CLI/API.

Signals

Real-Time Signal processing results in DNS Resolver, SIP, and XMPP requests.
When a Cluster is configured so that only the frontend servers can access the Internet, Real-Time Signal processing should take place on those frontend servers only.

Even if the Real-Time applications and Call Legs are configured to run on frontend servers only, Real-Time Signals can be generated on other Cluster members, too: XIMSS and XMPP sessions, Automated Rules, and other components can send Instant Messages; Event packages generate notification Signals; etc.

When a Real-Time Signal is running on a frontend server, it uses the inter-Cluster CLI/API to retrieve Account data (such as SIP registrations) or to perform requested actions (to deliver a SUBSCRIBE or XMPP IQ request, or to initiate a call).

Configuring Call Leg and Signal Processing

To configure the Call Leg and Signal creation mode, open the General page in the Settings WebAdmin realm and click the Cluster link:

Real-Time
Call Legs Processing:   
Signal Processing:   
Call Legs Processing
This setting specifies how the Real-Time Task and Call Leg objects should be created on this Cluster member.
Locally
When there is a request to create a Real-Time Task or a Call Leg object, it is created on this Server (this is the "regular", single-server processing mode).
Locally for Others
Real-Time Task and Call Leg objects are created on the same Server.
The Dynamic Cluster Controller is informed that this Server can create Real-Time Task and Call Leg objects for other Cluster members.
The Dynamic Cluster Controller collects and distributes information about all active Cluster members that have this option selected.
Remotely
When there is a request to create a Real-Time Task or a Call Leg object, the request is relayed to some Cluster member that has this setting set to Locally for Others.
Auto
same as:
Locally
if this Server is not a Dynamic Cluster member.
Locally for Others
if this Server is a Dynamic Cluster frontend.
Remotely
if this Server is a Dynamic Cluster backend.
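The Auto mode resolution described above amounts to a simple lookup by Server role. A minimal sketch (the role labels standalone/frontend/backend are illustrative names, not CommuniGate Pro identifiers):

```shell
# Sketch of the "Auto" mode table above; role names are illustrative only.
resolve_mode() {
  case "$1" in
    standalone) echo "Locally" ;;            # not a Dynamic Cluster member
    frontend)   echo "Locally for Others" ;; # Dynamic Cluster frontend
    backend)    echo "Remotely" ;;           # Dynamic Cluster backend
  esac
}
resolve_mode backend   # prints "Remotely"
```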
Signal Processing
This setting specifies how the Signal objects should be created on this Cluster member. Values for this setting have the same meaning as for the Call Legs Processing setting.

SIP

The CommuniGate Pro SIP Farm® feature allows several Cluster members to process SIP request packets randomly distributed to them by a Load Balancer.

Configure the Load Balancer to distribute incoming SIP UDP packets (port 5060 by default) to the SIP ports of the selected SIP Farm Cluster members.
If your Cluster has Frontend Servers, then all or some of the Frontend Servers should be used as SIP Farm members.

To configure the SIP Farm Members, open the General page in the Settings WebAdmin realm and click the Cluster link:

Real-Time
SIP Farm:   
SIP Farm
This setting specifies how the SIP requests should be processed by this Cluster member.
Member
If this option is selected, this Cluster member is a member of a SIP Farm. It processes new requests locally or it redirects them to other SIP Farm members based on the SIP Farm algorithms.
Disabled
If this option is selected, this Cluster member is not a member of a SIP Farm; it will process incoming SIP requests locally.
Relay
If this option is selected, this Cluster member is not a member of a SIP Farm; but when it needs to send a SIP request, it will relay it via the currently available SIP Farm members.
Select this option for Backend Servers that do not have direct access to the Internet and thus cannot send SIP requests directly.
Auto
  • if this Server is not a Dynamic Cluster member, same as Disabled
  • if this Server is a Dynamic Cluster frontend, same as Member
  • if this Server is a Dynamic Cluster backend, same as Relay if other Dynamic Cluster members are configured as Member; if there are none, same as Disabled
Note: a SIP request can explicitly address some Cluster member (most in-dialog requests do). These requests are always redirected to the specified Cluster member and processed on that member, regardless of the SIP Farm settings.

The CommuniGate Pro Cluster maintains the information about all its Servers with the SIP Farm setting set to Member. Incoming UDP packets and TCP connections are distributed to those Servers using regular, simple Load Balancers.

The receiving Server detects whether the received packet must be processed on a particular Farm Server: it checks whether the packet is a response or an ACK packet for an existing transaction, or whether the packet is directed to a Node created on a particular Server. In these cases the packet is relayed to the proper Cluster member:

[Figure: Cluster SIP]

Packets not directed to a particular Cluster member are distributed to all currently available Farm Members based on the CommuniGate Pro SIP Farm algorithms.
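The exact distribution algorithm is internal to CommuniGate Pro, but the general idea - every Farm Member deterministically computing the same target for a given request, so it does not matter which member the Load Balancer picked first - can be sketched as follows. The hashing scheme below is an assumption for illustration, not the actual SIP Farm algorithm:

```shell
# Illustrative only: hash a request key over the list of active Farm Members,
# so every member that receives the packet computes the same target.
pick_member() {  # $1 = request key (e.g. a Call-ID); remaining args = members
  key="$1"; shift
  n=$#
  h=$(printf '%s' "$key" | cksum | cut -d' ' -f1)
  shift $(( h % n ))   # rotate the argument list to the selected member
  echo "$1"
}
pick_member "a84b4c76e66710@pc33.example.com" F1 F2 F3
```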

To process a Signal, Cluster members may need to retrieve certain Account information (registration, preferences, etc.). If the Cluster member cannot open the Account (because the Member is a Frontend Server or because the Account is locked on a different Backend Server), it uses the inter-Cluster CLI/API to retrieve the required information from the proper Backend Server.

Several Load Balancer and network configurations can be used to implement a SIP Farm:

Single-IP NAT Load Balancer

This method is used for small Cluster installations, when the frontend Servers do not have direct access to the Internet, and the Load Balancer performs Network Address Translation for frontend Servers.

First select the "virtual" IP address (VIP) - this is the only address your Cluster SIP users will "see":

  • assign the VIP address to the Load Balancer
  • select a "sip-service" DNS domain name (such as sip.mysystem.com), and create a DNS A or AAAA record for that name, pointing to the VIP address.
  • create DNS SIP SRV records for all your Cluster Domains pointing to this "sip-service" name.
  • open the Network pages in the Settings realm of the CommuniGate Pro WebAdmin Interface, and specify the VIP address as the Cluster-wide WAN IP address; leave the Server-wide WAN IP Address field empty.
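A hypothetical DNS zone fragment for a Cluster Domain mysystem.com might look like this (203.0.113.10 stands in for the VIP address; the names, priorities, and weights are examples only):

```
; example records; 203.0.113.10 is a placeholder for the VIP address
sip.mysystem.com.        IN A    203.0.113.10
_sip._udp.mysystem.com.  IN SRV  10 0 5060 sip.mysystem.com.
_sip._tcp.mysystem.com.  IN SRV  10 0 5060 sip.mysystem.com.
```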

The frontend servers have IP addresses F1, F2, F3, ...

Configure the Load Balancer to process incoming UDP packets received on its VIP address and port 5060:

  • incoming packets should be redirected evenly to F1, F2, F3 frontend server addresses, to the same port 5060.
  • the Load Balancer should not apply any SIP-specific logic to these packets; if your Load Balancer has any SIP-specific options, make sure they are switched off. Some Load Balancers use SIP-specific processing for port 5060 by default: consult with your Load Balancer manufacturer.
  • incoming packets should not create any "session" in the Load Balancer, i.e. the Load Balancer should not keep any information about an incoming UDP packet after it has been redirected to some frontend server.

SIP-specific techniques implemented in some Load Balancers allow them to send all "related" requests to the same server. These techniques are usually based on the request Call-ID field and thus often fail. The CommuniGate Pro SIP Farm technology ensures proper request handling when a request or response packet is received by any SIP Farm member, so these SIP-specific Load Balancer techniques are not required with CommuniGate Pro.

Many Load Balancers create a "session binding" for incoming UDP requests, in the same way they process incoming TCP connections - even if they do not implement any SIP-specific techniques.
The Binding table for a given Load Balancer port (on the Load Balancer VIP address) contains IP address-port pairs:

X:x <-> F1:f
where X:x is the remote (sending) device IP address and port, and F1:f is the frontend Server IP address and port the incoming packet has been forwarded to.
When the remote device re-sends the request, this table record allows the Load Balancer to send the request to the same frontend Server (note that this is not needed with the CommuniGate Pro SIP Farm).

These Load Balancers usually create "session binding" for outgoing UDP requests, too: when a packet is sent from some frontend address/port F2:f to some remote address/port Y:y, a record is created in the Load Balancer Binding table:

Y:y <-> F2:f

When the remote device sends a response packet, this table record allows the Load Balancer to send the response to the "proper" frontend Server (note that this is not needed with the CommuniGate Pro SIP Farm).

CommuniGate Pro SIP Farm distributes SIP request packets by relaying them between the frontend Servers, according to the SIP Farm algorithms; the SIP Farm algorithms redirect the SIP response packets to the frontend Server that has sent the related SIP request.
These CommuniGate Pro SIP Farm features make the Load Balancer "session binding" table useless (when used for SIP UDP).

The Load Balancer "session binding" must be switched off (for SIP UDP), because it not only creates unnecessary overhead, but it usually corrupts the source address of the outgoing SIP packets:
When a Load Balancer receives a SIP request packet from X:x address, and relays it to the frontend Server F1:5060 address/port, the SIP Farm can relay this request to some other frontend Server (the F2:5060 address/port), where a SIP Server transaction will be created and the request will be processed.
SIP responses will be generated on this frontend Server, and the SIP response packets will be sent out to X:x from the F2:5060 address/port (via the Load Balancer).
If the Load Balancer does not do any "session binding", it should simply change the packet source address from F2:5060 to VIP:5060, and redirect it to X:x.

If the Load Balancer does implement UDP "session binding", it expects to see the response packets from the same F1:5060 address only; it will then redirect them to X:x after changing the response packet source address from F1:5060 to VIP:5060.
Packets from other servers (for which it does not have a "session binding") are processed as "outgoing packets", and the Load Balancer builds a new "session binding" for them (see above). In this case, when the Load Balancer sends a request from X:x to F1:5060 and gets a response from F2:5060, it has to create a second "session binding":

X:x <-> F1:5060
X:x <-> F2:5060
These are conflicting "session bindings" for most Load Balancers. To resolve the conflict, the Load Balancer will use NAT techniques and change not only the source address of the outgoing packet, but its source port, too - so the response packet will be sent to X:x with the source address set not to VIP:5060, but to VIP:5061 (or whatever source port the Load Balancer uses for NAT). Many SIP devices, and most SIP devices behind firewalls, will not accept responses from the VIP:5061 address/port if they have sent their requests to the VIP:5060 address/port.

It is very important to verify with your Load Balancer manufacturer that the Load Balancer does not use "session binding" for UDP port 5060, to avoid the problem described above.

Multi-IP NAT Load Balancer

In this configuration frontend Servers have direct access to the Internet (they have IP addresses directly "visible" from the Internet).

  • The Load Balancer redirects incoming requests to these real F1, F2, F3... frontend Server addresses.
  • The Load Balancer should be deployed as a switch, so that outgoing traffic from the frontend Servers passes via the Load Balancer.
  • The Load Balancer should change the source IP of all outgoing SIP UDP packets coming from frontend Servers (from Fn:5060) to VIP:5060.

Load Balancers with UDP "session binding" will have the same problems as described above.

DSR Load Balancer

DSR (Direct Server Response) is the preferred Load-Balancing method for larger installations.

To use the DSR method, create an "alias" for the loopback network interface on each Frontend Server. While the standard address for the loopback interface is 127.0.0.1, create an alias with the VIP address and the 255.255.255.255 network mask:

Solaris
ifconfig lo0:1 plumb
ifconfig lo0:1 VIP netmask 255.255.255.255 up
To make this configuration permanent, create the file /etc/hostname.lo0:1 with the VIP address in it.
Linux
ifconfig lo:0 VIP netmask 255.255.255.255 up
To make this configuration permanent, create the file /etc/sysconfig/network-scripts/ifcfg-lo:0:
DEVICE=lo:0
IPADDR=VIP
NETMASK=255.255.255.255
ONBOOT=yes
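On modern Linux systems the same alias can also be created with the ip tool (VIP stands for the actual address, as above; this one-off command is not persistent across reboots):

```shell
# equivalent one-off configuration with ip(8)
ip addr add VIP/32 dev lo label lo:0
```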

Make sure that the kernel is configured to avoid ARP advertising for this lo interface (so the VIP address is not linked to any Frontend server in ARP tables). Depending on the Linux kernel version, the following commands should be added to the /etc/sysctl.conf file:

# ARP: reply only if the target IP address is
# a local address configured on the incoming interface
net.ipv4.conf.all.arp_ignore = 1
#
# When an arp request is received on eth0, only respond
# if that address is configured on eth0.
net.ipv4.conf.eth0.arp_ignore = 1
#
# Enable configuration of arp_announce option
net.ipv4.conf.all.arp_announce = 2
# When making an ARP request sent through eth0, always use an address
# that is configured on eth0 as the source address of the ARP request.
net.ipv4.conf.eth0.arp_announce = 2
#
# Repeat for eth1, eth2 (if exist)
#net.ipv4.conf.eth1.arp_ignore = 1
#net.ipv4.conf.eth1.arp_announce = 2
#net.ipv4.conf.eth2.arp_ignore = 1
#net.ipv4.conf.eth2.arp_announce = 2
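After editing /etc/sysctl.conf, the new values can be applied without a reboot (root privileges required):

```shell
# reload all settings from /etc/sysctl.conf
sysctl -p
```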
FreeBSD
To change the configuration permanently, add the following line to the /etc/rc.conf file:
ifconfig_lo0_alias0="inet VIP netmask 255.255.255.255"
other OS
consult with the OS vendor
  • When this "alias" is created, the frontend Servers do not respond to the "arp" requests for the VIP address, and all packets directed to the VIP address will be routed to the Load Balancer.
  • When the Load Balancer uses the "DSR" method, it does not change the target IP address (VIP) of the incoming packets. Instead the Load Balancer redirects incoming packets using the network-level (MAC) address of the selected frontend Server.
  • Because the frontend Server has the VIP address configured on one of its interfaces (on the loopback interface), it accepts this packet as a local one, and passes it to the application listening on the specified TCP or UDP port.
  • Because the frontend Server has the VIP address configured on one of its interfaces, response and other outgoing packets can be sent using the VIP address as the source address. If these packets come via the Load Balancer (they can bypass it), the Load Balancer should not modify them in any way.

Note: Because MAC addresses are used to redirect incoming packets, the Load Balancer and all frontend Servers must be connected to the same network segment; there should be no router between the Load Balancer and frontend Servers.

Note: when a network "alias" is created, open the General Info page in the CommuniGate Pro WebAdmin Settings realm, and click the Refresh button to let the Server detect the newly added IP address.

The DSR method is transparent for all TCP-based services (including SIP over TCP/TLS); no additional CommuniGate Pro Server configuration is required: when a TCP connection is accepted on a local VIP address, outgoing packets for that connection always have the same VIP address as the source address.

To use the DSR method for SIP UDP, the CommuniGate Pro frontend Server configuration should be updated:

  • use the WebAdmin Interface to open the Settings realm. Open the SIP receiving page in the Real-Time section
  • follow the UDP Listener link to open the Listener page
  • by default, the SIP UDP Listener has one socket: it listens on "all addresses", on the port 5060.
  • change this socket configuration by changing the "all addresses" value to the VIP value (the VIP address should be present in the selection menu).
  • click the Update button
  • create an additional socket to receive incoming packets on the port 5060, "all addresses", and click the Update button
Now that there are two sockets - the first for VIP:5060, the second for all addresses:5060 - the frontend Server can use the first socket when it needs to send packets with the VIP source address.
Repeat this configuration change for all frontend Servers.

RTP Media

Each Media stream terminated in CommuniGate Pro (a stream relayed with a media proxy or a stream processed with a media server channel) is bound to a particular Cluster Member. The Load Balancer must ensure that all incoming Media packets are delivered to the proper Cluster Member.

Single-IP Method

The "single-IP" method is useful for small and medium-size installations.
The Cluster Members have internal addresses L1, L2, L3, etc.
The Load Balancer has an external address G0.
The Network Settings of each Cluster Member are modified, so the Media Ports used on each Member are different: ports 10000-19999 on the L1 Member, ports 20000-29999 on the L2 Member, ports 30000-39999 on the L3 Member, etc.
All packets coming to the G0 address to the standard ports (5060 for SIP) are distributed to the L1, L2, L3 addresses, to the same ports.
All packets coming to the G0 address to the media ports are distributed according to the port range:
  • packets coming to the ports 10000-19999 are directed to the L1 address (without port number change)
  • packets coming to the ports 20000-29999 are directed to the L2 address (without port number change)
  • packets coming to the ports 30000-39999 are directed to the L3 address (without port number change)
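With this layout the target Cluster Member can be derived directly from the destination port. A minimal sketch, assuming the 10000-ports-per-member blocks described above:

```shell
# map a destination media port to the Cluster Member that owns its range
member_for_port() {
  echo "L$(( $1 / 10000 ))"   # 10000-19999 -> L1, 20000-29999 -> L2, ...
}
member_for_port 23456   # prints "L2"
```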

The Server-wide WAN IP Address setting should be left empty on all Cluster Members.
The Cluster-wide WAN IP Address setting should specify the G0 address.

This method should not be used for large installations (unless there is little or no media termination): it allocates only 64,000 ports for all Cluster media streams. Each AVP stream takes 2 ports, so the total number of audio streams is limited to 32,000; if video is used together with audio, such a Cluster cannot support more than 16,000 concurrent A/V sessions.
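The port-budget arithmetic behind this limit:

```shell
# capacity arithmetic for the single-IP method
ports=64000                # media ports available behind one IP address
audio=$(( ports / 2 ))     # an AVP stream uses 2 ports (RTP + RTCP)
av=$(( ports / 4 ))        # an audio+video session uses 2 streams = 4 ports
echo "$audio audio streams, $av A/V sessions"   # prints "32000 audio streams, 16000 A/V sessions"
```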

Multi-IP No-NAT Load Balancer

The "multi-IP" method is useful for large installations. Each frontend has its own IP address, and when a Media Channel or a Media Proxy is created on that frontend Server, this unique IP address is used for direct communication between the Server and the client device or remote server.

The Network Settings of each Cluster Member can specify the same Media Port ranges, so the number of concurrent RTP streams is not limited to 64,000 ports.

In the simplest case, all frontend Servers have "real" IP Addresses, i.e. they are directly connected to the Internet.

If the Load Balancer uses a DSR method (see above), then it should not care about the packets originating on the frontend Servers from non-VIP addresses: these packets either bypass the Load Balancer, or it should deliver them without any modification.

If the Load Balancer uses a "normal" method, it should be instructed to process "load balanced ports" only, while packets to and from "other ports" (such as the ports in the Media Ports range) should be redirected without any modification.

Multi-IP NAT Method

You can use the Multi-IP method even if your frontend Servers do not have "real" IP Addresses but use "LAN"-type addresses L1, L2, L3, etc.

Configure the Load Balancer to host real IP Addresses G1, G2, G3,... - in addition to the VIP IP Address used to access CommuniGate Pro services.

Configure the Load Balancer to "map" its external IP address G1 to the frontend Server address L1: all packets coming to the IP address G1, port g (G1:g) are redirected to the frontend Server address L1, same port g (L1:g). The Load Balancer may change the packet target address to L1, or it may leave it as is (G1). When the Load Balancer receives a packet from the L1 address, port l (L1:l), and this port is not involved in load balancing operations (such as the SMTP, POP, IMAP, or SIP ports), the Load Balancer should redirect the packet outside, replacing its source address L1 with G1: L1:l -> G1:l.
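On a Linux-based NAT box, this G1<->L1 mapping could be expressed with iptables rules roughly like the following. The addresses are examples only (198.51.100.1 for G1, 192.168.0.1 for L1); real Load Balancers use their own configuration syntax, and the load-balanced service ports would need to be excluded from these rules:

```shell
# illustrative NAT rules only; adapt addresses, exclude load-balanced ports
iptables -t nat -A PREROUTING  -d 198.51.100.1 -j DNAT --to-destination 192.168.0.1
iptables -t nat -A POSTROUTING -s 192.168.0.1  -j SNAT --to-source 198.51.100.1
```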

Configure the Load Balancer in the same way to "map" its external IP addresses G2, G3, ... to the other frontend Server IP addresses L2, L3...

Configure the CommuniGate Pro frontend Servers, using the WebAdmin Settings realm. Open the Network pages, and specify the "mapped" IP addresses as Server-wide WAN IP Addresses: G1 for the frontend Server with L1 IP address, G2 for the frontend Server with L2 IP address, etc.


CommuniGate Pro Guide. Copyright © 2020, AO StalkerSoft