Our technology
The Backend solution is designed for modern, high-load games on PC, console, and mobile platforms.
It is built on a microservice architecture composed of multiple loosely coupled nodes:
- Ability to set up the architecture for every major region and data center, with load balancing between them
- Even redistribution of load when new modules are launched or obsolete ones are shut down
- Fast horizontal and vertical scaling when launching a new location
- Ability to roll out replaceable modules with various functions
- Seamless updates without idling the infrastructure
Game service nodes:
- Balancer
- Lobby
- PvP
- Shared Data:
  - Service for storing shared and intermediate data (Hazelcast IMDG)
  - Infrastructure monitoring service (Consul)

Database nodes:
- Distributed DB for authorization and shard-mapping information (Cassandra DB)
- DB cluster for storing player profiles (Stolon/PostgreSQL)
Balancer
Description:
The Balancer provides clients with base settings and spreads the load evenly across Lobby services. Access to the Balancer uses a Round Robin algorithm.
Rollout and Setup:
The service is configured and deployed automatically with Jenkins + Ansible. Basic settings include the Consul connection and the frequency of service-availability notifications. The server IP address must be set in the domain's A record.
Other modules interaction:
The client accesses the Balancer via a REST API; the Balancer likewise interacts with Consul via REST API.
Replacement:
Can be replaced with an HAProxy/Keepalived balancing node or with services implementing more complex logic. The service's own logic is also easy to extend.
Why:
The service offers high fault tolerance thanks to clustering. This solution returns the full Lobby list to the client, including player counts and load state, so the client can select the least loaded node.
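The client-side selection this enables can be sketched as follows (a minimal Python sketch; the response fields `host`, `players`, and `load` are hypothetical, not the actual Balancer API):

```python
# Illustrative sketch: picking the least loaded Lobby from a Balancer
# response. The field names ("host", "players", "load") are assumptions.

def pick_lobby(lobbies):
    """Return the lobby with the lowest reported load, ties broken by player count."""
    if not lobbies:
        raise ValueError("balancer returned no lobbies")
    return min(lobbies, key=lambda lobby: (lobby["load"], lobby["players"]))

lobbies = [
    {"host": "lobby-1", "players": 480, "load": 0.81},
    {"host": "lobby-2", "players": 120, "load": 0.22},
    {"host": "lobby-3", "players": 200, "load": 0.35},
]
print(pick_lobby(lobbies)["host"])  # -> lobby-2
```

Round Robin on the DNS side distributes the initial requests; this least-loaded pick happens afterwards on the client, using the load data the Balancer returns.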
Lobby
Description:
The Lobby authorizes clients in the system, grants access to databases and player profiles, and provides the business logic.
Rollout and Setup:
Service is configured and rolled out automatically with Jenkins + Ansible.
Other modules interaction:
Client: connects to the Lobby over WebSocket (STOMP + JSON) and holds the connection until the session ends.
Consul (REST API): reports service availability for connecting players.
Hazelcast (REST API): reads and stores client data; serves as temporary storage.
Replacement:
The service can be replaced with other similar services or run in parallel with them; the key requirement is implementing the API interaction correctly.
Why:
The service offers high fault tolerance thanks to clustering. This approach allows new Lobby services to be hot-launched and shut down when no longer needed.
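For illustration, a STOMP SEND frame carrying a JSON body, as a Lobby client might transmit over the WebSocket, can be sketched like this (the destination and payload fields are hypothetical, not this backend's actual message schema):

```python
import json

# Illustrative sketch of a STOMP SEND frame with a JSON body.
# Destination path and payload keys are assumptions for the example.

def stomp_send(destination, payload):
    """Build a STOMP 1.2-style SEND frame: command, headers, blank line, body, NUL."""
    body = json.dumps(payload)
    headers = [
        "SEND",
        f"destination:{destination}",
        "content-type:application/json",
        f"content-length:{len(body)}",
    ]
    return "\n".join(headers) + "\n\n" + body + "\x00"

frame = stomp_send("/app/lobby/login", {"playerId": "p-42", "token": "..."})
```

The frame ends with a NUL byte per the STOMP framing rules; the JSON body carries the application-level data.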
PvP
Description:
The PvP service provides the game logic for session-based co-op real-time gaming.
Rollout and Setup:
Service is configured and rolled out automatically with Jenkins + Ansible.
Other modules interaction:
Client: connects to PvP through a socket (TCP/UDP) and holds the connection until the end of the game session.
Consul (REST API): reports service availability for connecting players.
Hazelcast (REST API): reads and stores game data; serves as temporary storage for active races and their results.
Replacement:
The service can be replaced with other similar services or run in parallel with them; the key requirement is implementing the API interaction correctly.
Why:
The service offers high fault tolerance thanks to clustering. SmartFoxServer can be used as one possible solution for session games. The clustered data structure allows services to be turned on and off dynamically while keeping client connections active, enabling seamless updates.
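The role of the shared waiting list can be sketched as follows (a toy Python model; in the real system the list lives in Hazelcast, and the race size and function names here are assumptions):

```python
# Illustrative sketch of PvP matchmaking over a shared store: players
# waiting for a race sit in a shared list until enough have gathered.
# A plain dict stands in for the shared store; RACE_SIZE is hypothetical.

RACE_SIZE = 3

def enqueue(store, player_id):
    """Add a player to the waiting list; start a race once it is full."""
    waiting = store.setdefault("waiting", [])
    waiting.append(player_id)
    if len(waiting) >= RACE_SIZE:
        racers, store["waiting"] = waiting[:RACE_SIZE], waiting[RACE_SIZE:]
        return racers  # a new race session starts with these players
    return None

store = {}
assert enqueue(store, "p1") is None
assert enqueue(store, "p2") is None
print(enqueue(store, "p3"))  # -> ['p1', 'p2', 'p3']
```

Because the waiting list is held in shared storage rather than in any one PvP process, any PvP instance can complete the match, which is what lets instances be added or removed dynamically.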
Hazelcast
Description:
Hazelcast is an in-memory data store used for interactions with the client and for data exchange between services.
Rollout and setup:
Service is configured and rolled out automatically with Jenkins + Ansible.
Other modules interaction:
Cassandra (Spring Data JPA driver): obtains the sharding data that identifies which PostgreSQL cluster stores each client's data.
PostgreSQL (Spring Data JPA driver): obtains client data.
Lobby (REST API): provides access to and management of player data and temporary data, such as active races.
PvP (REST API): provides access to intermediate data, such as the list of users waiting for a race to start.
Replacement:
Can be replaced with any in-memory data store that supports clustering.
Why:
Clustering provides a single, centralized, fault-tolerant access point to all data.
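The temporary-storage role can be sketched with a minimal expiring map (plain Python, not the Hazelcast API; the class name and TTL are illustrative):

```python
import time

# Illustrative sketch of the temporary-storage role Hazelcast plays:
# an in-memory map whose entries expire after a TTL, so short-lived
# data such as active races cleans itself up. Not the Hazelcast API.

class ExpiringMap:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._data = {}

    def put(self, key, value, now=None):
        now = time.monotonic() if now is None else now
        self._data[key] = (value, now + self.ttl)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self._data.get(key)
        if entry is None or now >= entry[1]:
            self._data.pop(key, None)  # drop expired entries lazily
            return None
        return entry[0]

races = ExpiringMap(ttl_seconds=60)
races.put("race-1", {"players": ["p1", "p2"]}, now=0)
assert races.get("race-1", now=30) == {"players": ["p1", "p2"]}
assert races.get("race-1", now=61) is None  # expired
```

In the production setup a clustered store like Hazelcast additionally replicates these entries across nodes, which is what makes the shared data survive the loss of any single instance.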
Consul
Description:
Consul collects data about every launched service in the infrastructure. On startup, each service reports its status to Consul and is automatically incorporated into the existing infrastructure.
Rollout and setup:
Service is configured and rolled out automatically with Jenkins + Ansible.
Other modules interaction:
Lobby, PvP, Balancer (REST API): get and store information on service status.
PostgreSQL (REST API): gets and stores information on DB cluster nodes.
Replacement:
Can be replaced with any service discovery solution that supports clustering.
Why:
The service monitors the status of every element of the microservice infrastructure, provides an up-to-date, real-time list of launched services, and raises notifications when a DevOps engineer's attention is needed.
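The availability-tracking idea can be sketched as a heartbeat registry (a toy Python model, not Consul's actual check protocol; the timeout value is a hypothetical setting):

```python
# Illustrative sketch of Consul-style availability tracking: each service
# reports a heartbeat, and the registry lists only services whose last
# heartbeat is recent enough. Threshold and names are assumptions.

HEARTBEAT_TIMEOUT = 10  # seconds without a heartbeat before a service is dropped

def heartbeat(registry, service, now):
    """Record the latest heartbeat time for a service."""
    registry[service] = now

def alive_services(registry, now):
    """Return services whose last heartbeat is within the timeout."""
    return sorted(s for s, last in registry.items()
                  if now - last <= HEARTBEAT_TIMEOUT)

registry = {}
heartbeat(registry, "lobby-1", now=0)
heartbeat(registry, "pvp-1", now=5)
print(alive_services(registry, now=12))  # -> ['pvp-1'] (lobby-1 timed out)
```

The Balancer and Lobby nodes consume exactly this kind of live list, which is how newly launched services start receiving traffic without manual reconfiguration.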
Cassandra
Description:
Cassandra stores tables mapping players' third-party and social-network identities (Game Center, Google Play, Facebook) to their profiles, together with a link to the PostgreSQL shard where all of each player's data is stored.
Rollout and setup:
Service is configured and rolled out automatically with Jenkins + Ansible.
Other modules interaction:
Hazelcast (Spring Data JPA): reports which PostgreSQL cluster stores the addressed player profiles.
Replacement:
Can be replaced with any high-speed data storage solution that supports cross-data-center interaction.
Why:
Cassandra's key strengths are high-speed data exchange between clusters in different data centers and a high level of fault tolerance. It also keeps reserve copies of all information across clusters (even across data centers), so data survives the failure of any of them.
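The lookup role can be sketched as a directory from third-party identity to shard (illustrative Python; the shard names and hashing scheme are assumptions, not the production layout):

```python
import hashlib

# Illustrative sketch of the directory role Cassandra plays: mapping a
# third-party identity (GameCenter, GooglePlay, Facebook) to the
# PostgreSQL shard holding the full profile. New players are assigned
# a shard deterministically. All names here are hypothetical.

SHARDS = ["pg-shard-0", "pg-shard-1", "pg-shard-2"]

def lookup_shard(directory, provider, external_id):
    """Return the shard for an identity, registering it on first sight."""
    key = (provider, external_id)
    if key not in directory:
        digest = hashlib.sha256(f"{provider}:{external_id}".encode()).hexdigest()
        directory[key] = SHARDS[int(digest, 16) % len(SHARDS)]
    return directory[key]

directory = {}
first = lookup_shard(directory, "GameCenter", "G:12345")
assert lookup_shard(directory, "GameCenter", "G:12345") == first  # stable mapping
```

Because the directory entry, not the hash alone, is authoritative, profiles can later be migrated between shards by updating the stored link without changing any client-facing identity.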
PostgreSQL
Description:
The PostgreSQL cluster is implemented with Stolon (written in Go) and consists of three services:
- Keeper: provides DB access and replicates data
- Sentinel: monitors the state of the Keeper services and assigns the master/slave role for the database
- Proxy: balances the load across cluster elements
We recommend running three instances of each service: one in master mode and two as slaves.
Rollout and setup:
Service is configured and rolled out automatically with Jenkins + Ansible.
Other modules interaction:
Hazelcast (Spring Data JPA): transfers player profile data.
Replacement:
Can be replaced with any other database that supports clustering.
Why:
Multiple DB clusters give the node high fault tolerance. The master/slave configuration with dynamic role changes spreads the read/write load and keeps performance high. If any node fails, it is automatically dropped from the cluster structure without data loss, since all data is replicated multiple times.
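The Sentinel's failover decision can be sketched as follows (a toy Python model of the idea, not Stolon's actual consensus-based algorithm; the node names are hypothetical):

```python
# Illustrative sketch of the Sentinel role in a Stolon-like cluster:
# keep the current master while it is healthy, otherwise promote a
# healthy standby so reads and writes continue without intervention.

def elect_master(nodes, current_master):
    """nodes is a list of (name, healthy) pairs; return the master's name."""
    health = dict(nodes)
    if health.get(current_master):
        return current_master  # healthy master keeps its role
    for name, healthy in nodes:
        if healthy:
            return name  # first healthy standby becomes the new master
    raise RuntimeError("no healthy node available")

nodes = [("keeper-0", False), ("keeper-1", True), ("keeper-2", True)]
print(elect_master(nodes, current_master="keeper-0"))  # -> keeper-1
```

The "no data loss" property in the text comes from replication, not from this election: a standby can only be promoted safely because the Keepers have already copied the data to it.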
- The back-end solution isn't limited to the existing nodes and can be extended with additional functionality nodes and services. When changing existing nodes or adding new ones, the only requirement is use of the common shared data structure.
- Existing nodes can be split into separate nodes to optimize performance if needed.
- The solution allows third-party monitoring, analytics, and log-gathering services to be plugged in.
- The solution supports running multiple versions of any node, e.g. for A/B testing or for serving one variant of the project's functionality to one group of users while others use another variant.
- Automatic back-end infrastructure setup is available with the corresponding software: Ansible + Jenkins or Kubernetes + Docker.
- After the initial setup is complete, the system requires only monitoring and upkeep; no additional servicing is required.
- Provides a cross-regional and cross-data-center back-end infrastructure without additional development costs when the project is operated across multiple regions and data centers.
- The back-end software architecture is open to additional services or functionality if necessary:
  - Lobby: required functionality can be added to the Lobby node as separate classes
  - Game Logic: allows use of any game-logic module with game-specific functionality, depending on the project