Database (MongoDB)
Configuration, call data (such as registrations and subscriptions), and an increasing amount of other data are now stored in MongoDB, an HA-capable database clustering system. You now have a fair amount of flexibility in how you structure your database cluster and control how systems fail over.
MongoDB Architecture
For a High Availability (HA) environment, sipXecs requires more than one server for redundancy and resiliency. In past implementations of sipXecs (post 4.6), a minimum of two servers was required, with those two servers providing fail-over and recovery in the event of a server fault or failure.
With the addition of MongoDB, the requirements for an HA environment have changed significantly. MongoDB enhances the failover features of the system and makes it possible to scale to much larger implementations of sipXecs; along with that come new requirements for HA environments.
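sipXecs builds and manages its MongoDB replica set itself, so the following is for illustration only: a minimal sketch, using the Python pymongo driver, of how a three-member replica set can be initiated. The hostnames and the replica set name are placeholder assumptions, not values taken from a sipXecs installation.

    # Sketch only: initiate a 3-member MongoDB replica set with pymongo.
    # Hostnames and the set name below are placeholder assumptions.
    from pymongo import MongoClient

    # Connect directly to one member before the replica set exists.
    client = MongoClient("mongodb://mongo1.example.com:27017",
                         directConnection=True)

    config = {
        "_id": "sipxecs",  # assumed replica set name
        "members": [
            {"_id": 0, "host": "mongo1.example.com:27017"},
            {"_id": 1, "host": "mongo2.example.com:27017"},
            {"_id": 2, "host": "mongo3.example.com:27017"},
        ],
    }

    # replSetInitiate is a standard MongoDB admin command.
    client.admin.command("replSetInitiate", config)

With three members, any two that can still reach each other hold a majority and can elect a primary, which is why three servers tolerate the loss of one.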
Substantial documentation will need to be created to support this database. As information is gleaned from the mailing list, it will be added to this section to improve the available documentation.
When planning the number of servers in your network design, consider the following:
In the event of a network failure, the surviving set of MongoDB servers must hold a majority of the votes:
in a 2-node system this is BAD - 1 surviving and 1 dead (50% of votes)
in a 3-node system this is OK - 2 surviving and 1 dead (66% of votes)
in a 3-node system this is BAD - 1 surviving and 2 dead (33% of votes)
in a 4-node system this is OK - 3 surviving and 1 dead (75% of votes)
in a 4-node system this is BAD - 2 surviving and 2 dead (50% of votes)
so the surviving servers must hold more than 50% of the votes (a quick way to check this is sketched below)
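To make the majority rule concrete, here is a small Python sketch (an illustration, not part of sipXecs) that reproduces the cases above. It assumes each server carries one vote, which is the MongoDB default.

    # Sketch: does a surviving group of MongoDB servers keep quorum?
    # Assumes one vote per server (the MongoDB default).
    def has_quorum(total_nodes: int, surviving: int) -> bool:
        # A strict majority of all votes is required: more than 50%.
        return surviving > total_nodes // 2

    # Reproduce the cases listed above.
    for total, alive in [(2, 1), (3, 2), (3, 1), (4, 3), (4, 2)]:
        status = "OK" if has_quorum(total, alive) else "BAD"
        print(f"{total}-node system, {alive} surviving: {status}")

Note that a 4-node system tolerates no more failures than a 3-node one, so odd numbers of voting members are the economical choice.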
NOTE: If your phones and servers are in different subnets (e.g. a Boston office and a Dallas office, or an East Building and a West Building) and the link between the subnets is broken, then no matter how many MongoDB servers you have, one set of users will not work. sipXecs 4.6 Update 8 addresses this; RPMs that are ready for testing are available in the sipxecs unstable repository (9/9/13).
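When diagnosing a split like this, each side can ask a local mongod whether it still sees a primary. Below is a minimal sketch using pymongo and the standard replSetGetStatus admin command; the connection address is a placeholder assumption.

    # Sketch: check whether this side of a partition still has a primary.
    # replSetGetStatus is a standard MongoDB admin command.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017",
                         directConnection=True)
    status = client.admin.command("replSetGetStatus")

    primaries = [m["name"] for m in status["members"]
                 if m.get("stateStr") == "PRIMARY"]
    if primaries:
        print("Primary visible from this side:", primaries[0])
    else:
        print("No primary visible: this side has lost the majority.")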
WIP