Stages of scaling a Java EE application
I'm curious how professional programmers scale a web application. I have made a significant research effort but failed to find information on the stages of scaling, which might be related to the fact that server performance depends on many factors. However, I'm pretty sure the details can be laid out approximately.
For instance:
1.) How many concurrent requests can a single Tomcat server handle with a decent implementation and decent hardware?
2.) At what point should a load-balancer server get involved?
3.) When does a full Java EE stack (JBoss/GlassFish) begin to make sense?
I feel this is opinion-based but, ultimately, "it depends".
For example, how much load can Tomcat handle? It depends. If you're sending a static HTML page for every request, the answer is "a lot". If you're trying to compute the first 100,000 prime numbers every time, not so much.
In general, it's best to design your application for clustered/distributed use from the start. Don't count on anything in the session - keeping sessions in sync across nodes can be expensive. It's best to have every method be stateless. That can be harder on the consumer (i.e. the web site), which may have to pass a bit more information on each call so that any of the clustered machines knows the current state of the request. And so on.
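To illustrate the stateless style described above, here is a minimal sketch. The names (`CartService`, `PriceRequest`, the coupon code) are illustrative, not from any real framework: the point is that all request state arrives as parameters on every call instead of living in a server-side session, so any node behind the balancer can serve it.

```java
import java.util.List;

public class CartService {

    // Immutable value object carrying the state the client resends on
    // every call, rather than the server remembering it in a session.
    public record PriceRequest(List<Double> itemPrices, String couponCode) {}

    // Pure function of its inputs: no instance fields, no session lookups,
    // so the call can land on any machine in the cluster.
    public double total(PriceRequest req) {
        double sum = req.itemPrices().stream()
                        .mapToDouble(Double::doubleValue)
                        .sum();
        if ("SAVE10".equals(req.couponCode())) {
            sum *= 0.9; // hypothetical 10% coupon
        }
        return sum;
    }
}
```

The trade-off is exactly the one mentioned: the caller carries more data per request, but the server needs no session replication at all.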
I moved a web app from Tomcat to GlassFish and then WildFly when I wanted to take advantage of additional Java EE functionality - JMS, CDI, and JPA. I could have used TomEE, but the bolted-in unified environment with a unified management UI was a nice benefit too. You may never need that, though. You can add just the parts you want (i.e. CDI and JPA) to Tomcat.
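As a sketch of what "adding CDI and JPA to Tomcat" can look like with Maven, assuming Weld as the CDI implementation and Hibernate as the JPA provider (artifact choices and versions are my assumptions, not from the original answer):

```xml
<!-- Weld's servlet-container build brings CDI to plain Tomcat -->
<dependency>
    <groupId>org.jboss.weld.servlet</groupId>
    <artifactId>weld-servlet-shaded</artifactId>
    <version>3.1.9.Final</version>
</dependency>
<!-- Hibernate as a standalone JPA provider, no EE server required -->
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>5.6.15.Final</version>
</dependency>
```

You'd still need the usual `beans.xml` and `persistence.xml` wiring, but nothing here requires a full EE server.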
Note that I didn't move from Tomcat to a full EE server for performance - I wanted to take advantage of a larger part of the EE stack. While WildFly has management interfaces that make managing a cluster a bit easier, I could have still used Tomcat with no problem.
So, again, "it depends". If you don't need more of the EE stack than Tomcat provides, a full EE server may be overkill. Putting a set of Tomcat servers behind an Apache httpd load balancer (or an Amazon one) on top of a clustered database isn't bad to implement. If that's sufficient, I'd stick with it. Don't jump to WildFly, etc. for performance - you won't see a huge change in either direction.
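For reference, the httpd-in-front-of-Tomcat setup can be sketched with `mod_proxy_balancer` roughly like this (hostnames, ports, and the `/app` path are placeholders; this assumes `mod_proxy`, `mod_proxy_http`, `mod_proxy_balancer`, and a load-balancing method module are loaded):

```apache
# Two Tomcat nodes behind one httpd front end
<Proxy "balancer://tomcat-cluster">
    BalancerMember "http://app1.internal:8080"
    BalancerMember "http://app2.internal:8080"
</Proxy>
ProxyPass        "/app" "balancer://tomcat-cluster/app"
ProxyPassReverse "/app" "balancer://tomcat-cluster/app"
```

If the application is stateless as described above, no sticky sessions are needed; otherwise you'd add a `stickysession` setting or session replication on the Tomcat side.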