Mongo Replication
At a glance
In 4.4 sipXconfig
Mongo
In 4.6 the replication concept remained the same: sipXconfig gathers data from the UI, stores it in the SIPXCONFIG PostgreSQL database, and writes it to a location accessible by other services. However, IMDB replication is now centered around "replicable entities". There are still configuration files, but they are now handled by cfengine. These files are small, so their creation does not cause out-of-memory (OOM) errors and their transfer does not cause XML-RPC problems. For instance, dialing rules and service configurations are still handled this way.
However, a decision was taken to change the IMDB from fastDB to MongoDB. MongoDB has the advantage of being designed to handle large amounts of data ("mongo" comes from "humongous") and of having its own replication mechanism across nodes. We also take advantage of Mongo's "replica set" concept. You can read more about MongoDB on their website: www.mongodb.org.
Replicable entities
A replicable entity is an entity that is written to MongoDB in its own document, which can then be retrieved by any service at any location. Such an entity (read: a Java object) implements the Replicable interface. For instance, org.sipfoundry.sipxconfig.common.User is a replicable entity. We take for granted that each Replicable is a BeanWithId. The Replicable interface has a few methods that an entity must implement:
    public Set<DataSet> getDataSets();
    public String getIdentity(String domainName);
    public Collection<AliasMapping> getAliasMappings(String domainName);
    public boolean isValidUser();
    public Map<String, Object> getMongoProperties(String domain);
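To make the interface concrete, here is a minimal, self-contained sketch of a hypothetical entity implementing it. The DataSet enum values, the AliasMapping fields, the DemoUser class, and the "uid" property name are all illustrative stand-ins, not the real sipXconfig definitions (the real org.sipfoundry.sipxconfig.common.User is a BeanWithId with far more state):

```java
import java.util.*;

// Minimal stand-ins for sipXconfig types; the real definitions live in sipXconfig.
enum DataSet { ALIAS, USER_STATIC }

class AliasMapping {
    final String identity;
    final String contact;
    AliasMapping(String identity, String contact) {
        this.identity = identity;
        this.contact = contact;
    }
}

// The Replicable interface as listed above.
interface Replicable {
    Set<DataSet> getDataSets();
    String getIdentity(String domainName);
    Collection<AliasMapping> getAliasMappings(String domainName);
    boolean isValidUser();
    Map<String, Object> getMongoProperties(String domain);
}

// A hypothetical replicable entity, for illustration only.
class DemoUser implements Replicable {
    private final String userName;

    DemoUser(String userName) {
        this.userName = userName;
    }

    public Set<DataSet> getDataSets() {
        // Declares which DataSets apply to this entity.
        return EnumSet.of(DataSet.ALIAS);
    }

    public String getIdentity(String domainName) {
        return userName + "@" + domainName;
    }

    public Collection<AliasMapping> getAliasMappings(String domainName) {
        return Collections.singletonList(
            new AliasMapping(getIdentity(domainName), "sip:" + userName));
    }

    public boolean isValidUser() {
        return true;
    }

    public Map<String, Object> getMongoProperties(String domain) {
        // Extra fields merged into the Mongo document; the field name
        // "uid" is illustrative (real names come from MongoConstants).
        Map<String, Object> props = new HashMap<>();
        props.put("uid", userName);
        return props;
    }
}
```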
A replicable entity may or may not define a set of DataSets. A DataSet may be regarded as a set of common properties that are written to the Mongo document through a DataSetGenerator. Each DataSet has its own DataSetGenerator that extends AbstractDataSetGenerator. AbstractDataSetGenerator is used only to inject some common properties (like the CoreContext and the SIP domain) and to define the abstract methods that dataset generators need to implement.
DataSet Generators
The DataSet generators are in the org.sipfoundry.sipxconfig.commserver.imdb package and extend AbstractDataSetGenerator. They are responsible for actually preparing the document that will be written to Mongo; the actual write is done in ReplicationManagerImpl. If you take a look at org.sipfoundry.sipxconfig.commserver.imdb.SpeedDials you will see good examples of constructing the Mongo document object. You can also check out the Mongo Java API (http://api.mongodb.org/java/2.6.3/).
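As a rough sketch of what a generator's generate(Replicable entity, DBObject top) method does, the following builds a speed-dial-like structure into a plain Map. This is an assumption-laden illustration, not the real SpeedDials code: the Map stands in for com.mongodb.DBObject, and the SpeedDialGeneratorSketch class and field names are made up:

```java
import java.util.*;

// Sketch only: a plain Map stands in for com.mongodb.DBObject, and the
// field names below are illustrative, not taken from MongoConstants.
class SpeedDialGeneratorSketch {

    // Fill the document ("top") with the entity's speed-dial numbers,
    // mirroring the shape of a DataSetGenerator.generate(entity, top) call.
    static void generate(Map<String, Object> top, List<String> numbers) {
        List<Map<String, Object>> buttons = new ArrayList<>();
        for (String number : numbers) {
            Map<String, Object> button = new HashMap<>();
            button.put("uri", "sip:" + number);
            buttons.add(button);
        }
        top.put("buttons", buttons);
        // The real generator would finish with getDbCollection().save(top).
    }
}
```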
MongoConstants
org.sipfoundry.commons.mongo.MongoConstants is the common place where field names are defined. It lives in the sipXcommons project, which is accessible to any Java project.
ReplicationTrigger
Until the introduction of Mongo, replication triggers were scattered across different implementations. While not perfect, we tried to define a common place for all replication triggers, at least for Mongo replications. Since sipXconfig intercepts save*(Object) and delete*(Object) methods, we figured this would be a good way to trigger Mongo replications. This is where org.sipfoundry.sipxconfig.admin.commserver.imdb.ReplicationTrigger comes into play. It implements the DaoEventListener interface, which defines two methods, public void onSave(Object entity) and public void onDelete(Object entity), that get triggered by saving or deleting an object. ReplicationTrigger's implementation of the two methods will mainly call the replication manager, based on some conditions.
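The dispatch described above can be sketched as follows. Everything here is a simplified stand-in, assuming only what the text states: the real ReplicationTrigger applies more conditions per entity type, and the ReplicationManager method names below are ours, not the real bean's API:

```java
import java.util.*;

// Stand-in marker for replicable entities; the real check is more involved.
interface ReplicableMarker {
}

// Stand-in for the replication manager API the trigger delegates to.
interface ReplicationManager {
    void replicateEntity(Object entity);
    void removeEntity(Object entity);
}

// Records calls so the dispatch can be observed without a Mongo instance.
class RecordingManager implements ReplicationManager {
    final List<String> calls = new ArrayList<>();
    public void replicateEntity(Object entity) { calls.add("save"); }
    public void removeEntity(Object entity) { calls.add("delete"); }
}

// Sketch of the DaoEventListener-style dispatch in ReplicationTrigger.
class ReplicationTriggerSketch {
    private final ReplicationManager manager;

    ReplicationTriggerSketch(ReplicationManager manager) {
        this.manager = manager;
    }

    // Invoked by the interceptor after any save*(Object) call.
    public void onSave(Object entity) {
        if (entity instanceof ReplicableMarker) {
            manager.replicateEntity(entity);
        }
    }

    // Invoked by the interceptor after any delete*(Object) call.
    public void onDelete(Object entity) {
        if (entity instanceof ReplicableMarker) {
            manager.removeEntity(entity);
        }
    }
}
```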
Replication Manager
The main function of the replication manager bean (org.sipfoundry.sipxconfig.admin.commserver.imdb.ReplicationManagerImpl) is to delegate the replication of an entity to the relevant DataSetGenerator. It is also responsible for the parallel, asynchronous replication of groups of replicable entities (groups, branches) and for the regeneration of the entire entity collection. It used to also hold the business methods that built the service configuration files to be replicated, delegating the actual replication to the supervisor on the specified location. In 4.6, files are replicated using the newly introduced cfengine, which is not the subject of this page.
AbstractDataSetGenerator defines a few methods common to all DataSet generators, such as public DBCollection getDbCollection() and public static String getEntityId(Replicable entity). getDbCollection() will instantiate the Mongo collection in which the entities are written (currently imdb.entity); here we can also apply some properties to the mongo connection/collection. For instance, we may define indexes:
    DBObject indexes = new BasicDBObject();
    indexes.put(MongoConstants.TIMESTAMP, 1);
    entity.createIndex(indexes);
The getEntityId(Replicable entity) method defines the unique identity of the document, as described here: http://www.mongodb.org/display/DOCS/Object+IDs. The identity is unique and is formed from the simple class name of the entity and the id of the object from PostgreSQL. It is indexed by Mongo by default.
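The identity construction ("simple class name + PostgreSQL id") can be sketched in a few lines. DemoBean and EntityIdSketch below are stand-ins of our own, not the real BeanWithId or AbstractDataSetGenerator code:

```java
// Stand-in for BeanWithId; the real class comes from sipXconfig.
class DemoBean {
    private final Integer id;
    DemoBean(Integer id) { this.id = id; }
    Integer getId() { return id; }
}

class EntityIdSketch {
    // The document identity: simple class name + PostgreSQL id,
    // e.g. a User with id 42 would yield "User42".
    static String getEntityId(DemoBean entity) {
        return entity.getClass().getSimpleName() + entity.getId();
    }
}
```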
In turn, the DataSetGenerator class implements the public DBObject findOrCreate(Replicable entity) method, a business method that finds an object in the Mongo entity collection or creates it if it does not exist. It also adds some properties to the object, such as MongoConstants.IDENTITY (ident) and the properties defined by each replicable entity in its getMongoProperties method.

Note that the actual saving is done by each DataSetGenerator subclass (for instance org.sipfoundry.sipxconfig.admin.commserver.imdb.Aliases) in its public void generate(Replicable entity, DBObject top) method (you will notice a call to getDbCollection().save(top)). The generate method also performs the business logic that fills in the relevant information of the Mongo document object. For instance, the Aliases generate method retrieves all aliases of the entity and adds them to the object in a well-defined structure. org.sipfoundry.sipxconfig.admin.commserver.imdb.Mailstore.generate(Replicable entity, DBObject top) retrieves all information pertinent to a user's mailstore, like the email address, IMAP server configuration, etc.

All the information written to a Mongo document was once stored in files. For a user we had information scattered across different files; now most of it is kept in Mongo.
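The find-or-create pattern can be sketched against an in-memory map standing in for the imdb.entity collection. This is an assumption-heavy illustration: the class and method shapes below are ours, and only the _id and "ident" (MongoConstants.IDENTITY) fields come from the text above:

```java
import java.util.*;

// Sketch of findOrCreate against an in-memory map keyed by _id, standing in
// for the imdb.entity Mongo collection. Names here are ours, not the real API.
class EntityCollectionSketch {
    private final Map<String, Map<String, Object>> collection = new HashMap<>();

    // Return the existing document for this id, or create one seeded with
    // _id and the identity field (MongoConstants.IDENTITY, i.e. "ident").
    Map<String, Object> findOrCreate(String id, String identity) {
        Map<String, Object> doc = collection.get(id);
        if (doc == null) {
            doc = new HashMap<>();
            doc.put("_id", id);
            doc.put("ident", identity);
            collection.put(id, doc);
        }
        return doc;
    }
}
```

The second lookup for the same id returns the already-created document, which a generator can then extend with its DataSet-specific fields before saving.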