Showing results for tags 'aws'.
Found 3 results
Hello everybody, I have the following scenario: I have an EC2 instance cluster and an RDS MySQL database server, all in AWS. Our website gets 1,500 visitors per minute, and at a specific hour of the day the connection rate jumps from 10 to 1,200 per second across the 3 EC2 instances, and then there are some intermittent failures. This is my Revive DB configuration:

[database]
type=mysql
host="my.db.com"
socket="/var/run/mysqld/mysqld.sock"
port=3306
username=revive
password="mypass"
name=ultra_ads
persistent=
mysql4_compatibility=1
protocol=tcp
compress=
ssl=
capath=
ca=

[databaseCharset]
checkComplete=1
clientCharset=

[databaseMysql]
statisticsSortBufferSize=

[databasePgsql]
schema=

Please tell me if you need any more information or configuration files; I'm really new to this kind of service.
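One thing worth checking for a connection spike like that: the `persistent=` setting in the `[database]` section is empty, so every request opens and tears down its own MySQL connection. Enabling persistent connections can reduce that churn. A minimal sketch of the change, assuming your PHP build supports persistent MySQL connections (all other values copied from the config above, not verified against your site):

```ini
[database]
type=mysql
host="my.db.com"
port=3306
username=revive
password="mypass"
name=ultra_ads
; Reuse connections across requests instead of opening a new one per hit.
; Assumption: your PHP/MySQL driver supports persistent connections and
; RDS max_connections is sized to hold the pooled connections.
persistent=1
mysql4_compatibility=1
protocol=tcp
```

Also compare your peak connection rate against the RDS instance's `max_connections` parameter; persistent connections trade connect/teardown cost for a higher steady connection count.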
I'm running an AWS instance that uses Webuzo's Revive Adserver installation. I'm not sure if this matters, but the Webuzo installer creates a new Linux user, let's call it 'adserver', then puts public_html and www folders inside that user's home folder to manage the installation. Long story short, the IP has changed on the instance and I'm getting a 404 for any Revive URL, including the admin panel and the root URL at the new IP. I updated the config files in /var/ with the new IP, but no dice. Is there some way to 'reinitialize' Revive, maybe by running init.php again or removing the INSTALLED file in config and /var/? The DB isn't remote... Webuzo runs it off localhost, so on the actual EBS volume, I'm guessing. I'm totally lost here. Any direction would be greatly appreciated.
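One possible cause: Revive names its configuration file inside var/ after the delivery hostname (a file like `<hostname>.conf.php`), so after the IP/hostname change it may be looking for a config file that doesn't exist yet, which would explain the blanket 404s even with updated contents. A sketch of what the fix could look like; every path and hostname below is a placeholder (assumption), not taken from your instance:

```shell
# Placeholders (assumptions) -- substitute your real install path and hosts.
VAR_DIR="/home/adserver/public_html/var"
OLD_HOST="ec2-203-0-113-10.compute-1.amazonaws.com"
NEW_HOST="ec2-198-51-100-20.compute-1.amazonaws.com"

# Revive resolves its config by the hostname of the incoming request, so a
# config named after the old host is invisible to requests on the new one.
# Print the copy command for review before actually running it:
echo "cp $VAR_DIR/$OLD_HOST.conf.php $VAR_DIR/$NEW_HOST.conf.php"
```

After copying, you would still update the `webpath`/URL entries inside the new file to point at the new hostname or IP.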
Hey guys, I'm having a hard time setting up a large-scale deployment of Revive 3.0.3. I'll explain what I'm doing; maybe you can help me scale things out or spot something I'm missing. My current architecture is the following:

2x Portal (EC2 m1.large): runs smoothly (~30% CPU) from 0-5,000 visitors (Google realtime), or ~600 visits according to the nginx_status/fpm_status pages. PHP-FPM with pm = dynamic and pm.max_children = 512, with sysctl tuning in effect. No opcache set up yet.

2x Revive (EC2 m1.large): runs at critical load and PHP-FPM stops. Same setup: PHP-FPM with pm = dynamic and pm.max_children = 512, sysctl tuning in effect, no opcache yet. http://imgur.com/0D7kgvd

1x RDS (db.m1.xlarge): was running a high connection count, mostly because of table locking; solved after migrating from MyISAM to InnoDB and caching the logging (LG) calls in nginx (so no visit/impression counting, just click tracking). It runs the queries that make the ad server work. The picture below shows the situation for this setup: http://imgur.com/1e86Gqe

The PHP portal makes server-side requests to the ad server. One of the main pages has at least 90 banners, so each page view spawns as many as 90 multi-curl requests. After that, the client side renders the banners correctly and makes another ~90 requests for the LG beacons.

The problem now is scaling this out. Am I missing something? How do you run such a massive delivery/impression log with RDS? Should I try a different approach? Thanks in advance for the help, and please make yourselves comfortable asking for more configs or any other questions.
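Since both tiers note "no opcache yet": Revive's delivery scripts are pure PHP, so without an opcode cache every one of those ~90 requests per page view recompiles the scripts from source. Enabling an opcode cache is usually the cheapest first win before adding hardware. A minimal php.ini sketch, assuming a PHP version that ships OPcache (the values are illustrative starting points, not tuned for this site):

```ini
; Enable OPcache so delivery scripts are compiled once and served from
; shared memory instead of being recompiled on every request.
zend_extension=opcache.so
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
opcache.validate_timestamps=1
```

On older PHP builds without OPcache, APC plays the same role. Either way, restart PHP-FPM after the change and watch the fpm_status pages to see whether the Revive tier stops saturating.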