Normally Jita reaches a maximum of about 800-900 pilots on any given Sunday. On the Friday following the deployment of StacklessIO, 19 September, there were close to 1,000 concurrent pilots in Jita, and on the Saturday, 20 September, the maximum number reached 1,400. This is more than have ever been in Jita at the same time. Under our old network technology Jita could become rather unresponsive at 800-900 pilots, but on the Sunday, 21 September, it was quite playable and very responsive with 800 pilots, thanks to StacklessIO.
Alas, there were teething problems. At 1,400 pilots the node hosting Jita ran out of memory and crashed. As crazy as it may sound, this was very exciting, since we had never before been in a position to have that problem: under our old network technology Jita would lag out before reaching that point. We immediately turned our attention to the challenge of giving the EVE server more memory to access.
The normal setup in the cluster for the server nodes is that each blade has two 64-bit processors, 4 GB of memory and runs Windows Server 2003 x64. Each blade runs two nodes and each node then hosts a number of solar systems. There are also dedicated nodes for the market, dedicated nodes for corporation services, a dedicated head node for the cluster, etc.
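As a purely illustrative sketch of that layout (the node names, roles and structure here are hypothetical, not our actual configuration), the mapping of solar systems and dedicated services onto nodes can be pictured roughly like this:

```python
# Hypothetical sketch of how solar systems and services map onto cluster
# nodes. Names and structure are illustrative only.

# Each blade runs two server nodes; each node hosts a set of solar systems
# or a dedicated service such as the market or corporation services.
cluster_layout = {
    "blade01-nodeA": {"role": "solarsystems", "systems": ["Amarr", "Rens"]},
    "blade01-nodeB": {"role": "solarsystems", "systems": ["Hek", "Dodixie"]},
    "blade02-nodeA": {"role": "market"},          # dedicated market node
    "blade02-nodeB": {"role": "corporation"},     # dedicated corp services
    "head-node":     {"role": "cluster-head"},    # coordinates the cluster
    "bigiron01":     {"role": "solarsystems", "systems": ["Jita"]},
}

def node_for_system(system_name):
    """Return the node currently hosting a given solar system."""
    for node, info in cluster_layout.items():
        if system_name in info.get("systems", []):
            return node
    return None

print(node_for_system("Jita"))  # -> "bigiron01"
```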
Finally there is a pool of dedicated dual-CPU, dual-core machines that only run a single EVE64 node per machine. Jita and four other high-use solar systems are assigned to that pool. That pool is now running all-native 64-bit code and the blades have been upgraded to 16 GB of memory. These blades also have more powerful CPUs, which has helped as well. We are currently working with our vendors on testing even more powerful hardware options now that we can utilise the hardware much better.
This Monday, 29 September, we saw a fleet battle with over 1,100 pilots reported in local. Field reports indicate that the fight was quite responsive for the first 10 minutes, but then the node "missed its heartbeat", as we call it, and was removed from the cluster by our cluster integrity watchdog routines. This is another exciting problem, because it is one we can now address in our StacklessIO world.
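To illustrate the idea behind that watchdog, here is a minimal sketch of heartbeat-style cluster monitoring. The class name, interval and removal logic are assumptions for illustration and do not reflect the actual EVE cluster code; the point is simply that a node too busy to report in on time is indistinguishable from a dead node and gets dropped.

```python
import time

# Minimal sketch of a heartbeat watchdog. Each node is expected to report
# in at least once per interval; a node that misses its heartbeat is
# removed from the cluster. Names and thresholds are illustrative only.

HEARTBEAT_INTERVAL = 5.0   # seconds a node has to report in (assumed value)

class Watchdog:
    def __init__(self):
        self.last_seen = {}    # node id -> timestamp of last heartbeat
        self.members = set()   # nodes currently considered part of the cluster

    def register(self, node_id):
        self.members.add(node_id)
        self.last_seen[node_id] = time.monotonic()

    def heartbeat(self, node_id):
        # Called whenever a node reports in.
        self.last_seen[node_id] = time.monotonic()

    def check(self):
        # Remove any node whose heartbeat is overdue. A node stuck grinding
        # through a huge fleet fight looks exactly like a dead node here.
        now = time.monotonic()
        for node_id in list(self.members):
            if now - self.last_seen[node_id] > HEARTBEAT_INTERVAL:
                self.members.discard(node_id)
                print(f"node {node_id} missed its heartbeat, removing from cluster")

# Usage sketch:
wd = Watchdog()
wd.register("jita-node")
wd.heartbeat("jita-node")
wd.check()   # still within the interval, nothing happens
```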