[Problem] Scaling Out RDS, Nginx Adserver and Nginx Portal



Hey guys,

 

    I'm having a hard time building a large-scale setup for Revive 3.0.3. I'll explain what I'm doing; maybe you can help me scale things out or spot something I'm missing.

 

My current architecture follows the setup below:

  1. 2x Portal - (EC2 m1.large) = runs smoothly (~30% CPU) from 0-5,000 visitors (Google realtime), or ~600 visits according to the nginx_status/fpm_status pages
    1. PHP-FPM with pm = dynamic and pm.max_children = 512, with sysctl configs effectively set; no OPcache set up yet.
  2. 2x Revive - (EC2 m1.large) = runs critically loaded and stops
    1. PHP-FPM with pm = dynamic and pm.max_children = 512, with sysctl configs effectively set; no OPcache set up yet.
    2. http://imgur.com/0D7kgvd
  3. 1x RDS - (db.m1.xlarge) = was running a high connection count, mostly because of table locking; solved after migrating from MyISAM to InnoDB and caching the logging (LG) beacons on nginx (so no visit/impression counts, just clicks)
    1. Runs the queries for the ad server; the picture below shows the situation for the setup
    2. http://imgur.com/1e86Gqe
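For reference, the pool settings described above would look roughly like this in a PHP-FPM pool config (pm.max_children = 512 is the value from my setup, not a recommendation; the spare-server values below are illustrative placeholders, and you'd normally size max_children to available RAM divided by per-process memory):

```ini
; PHP-FPM pool config (path varies by distro, e.g. /etc/php-fpm.d/www.conf)
pm = dynamic
pm.max_children = 512   ; value from this setup; size to RAM in practice
pm.start_servers = 32       ; illustrative
pm.min_spare_servers = 16   ; illustrative
pm.max_spare_servers = 64   ; illustrative
```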

 The PHP portal makes server-side requests to the adserver. One of the main pages has at least 90 banners, so when it opens, it spawns as many as 90 multicurl requests per page view per visit. After that, the client side renders the banners correctly and makes another 90 requests for the LG (logging) beacons.
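To illustrate the amplification described above: each page view fans out into one server-side delivery request per banner, plus one client-side logging beacon per banner. A minimal sketch of that fan-out (in Python for brevity; `fetch` is a stand-in for the portal's multicurl call, and the `ajs.php?zoneid=` URL shape is only illustrative of a per-zone delivery request):

```python
from concurrent.futures import ThreadPoolExecutor

BANNERS = 90  # banners on the main portal page

def fetch(url):
    # stand-in for an HTTP GET against the adserver (hypothetical)
    return "ad for " + url

# server-side fan-out: one delivery request per banner, per page view
urls = ["/www/delivery/ajs.php?zoneid=%d" % z for z in range(1, BANNERS + 1)]
with ThreadPoolExecutor(max_workers=32) as pool:
    ads = list(pool.map(fetch, urls))

# each page view costs BANNERS delivery requests server-side,
# plus BANNERS logging-beacon requests client-side
print(len(ads))      # 90
print(2 * len(ads))  # 180 adserver hits per single page view
```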

 

The problem now is scaling this thing out. Am I missing something? How can I run such a massive delivery/impression log with RDS? Should I try a different approach?

 

Thanks in advance for the help, and please feel free to ask for more configs or anything else.


I'm sorry to say this, but all the above doesn't make much sense to me.

 

1. Production servers should always have an opcode cache

2. 90 server-side requests per page is nuts

3. If you don't want impression logging, just disable it in the settings
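On the first point: a minimal php.ini fragment for enabling OPcache (bundled with PHP 5.5+; the extension path and values below are starting points, not tuned recommendations) might look like this:

```ini
; Enable OPcache -- zend_extension path may differ per build/distro
zend_extension=opcache.so
opcache.enable=1
opcache.memory_consumption=128       ; MB of shared opcode memory
opcache.interned_strings_buffer=8    ; MB for interned strings
opcache.max_accelerated_files=10000  ; should cover Revive's file count
opcache.validate_timestamps=1        ; set to 0 on immutable deploys
opcache.revalidate_freq=60           ; seconds between stat checks
```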

 

Hey Matteo,

 

    I was in a rush explaining it, let me clarify. We're configuring OPcache on those servers now. The load on those adservers is still so high that I can't figure out what's wrong. The portal is a landing page with millions of accesses per day, so it's normal for it to carry ~90 ads; I don't think that's insane or nuts.

   The third point is needed because of the CTR: it doesn't make sense to keep only the clicks, or maybe I don't understand the relation between impressions and views.


I think that spawning 90 requests for each page view on the portal is way too much. Ideally you'd need 180 instances running the adserver (your 2 portal instances x 90 requests) if the load one page view generates on the portal is roughly the same as one ad request generates on the adserver.

 

IMHO you should review the way you're doing things atm and maybe opt for something like SPC (i.e. one client-side request to fetch multiple zones). It will still be very heavy on the adserver to go through the ad selection algorithm 90 times, but that's much better than 90 separate calls. Even in that case impression logging should be disabled, so that you don't send 90 impression beacons to the client; with some JS you might be able to dynamically append a single beacon that triggers impressions for all the ad/zone combinations displayed on the page.
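For reference, a single page call replaces the 90 per-zone tags with one script include plus one short call per placement. The generated invocation looks roughly like this (generate the real tag from the Revive admin UI; the hostname and id here are placeholders, and the exact shape may vary by version):

```html
<!-- one request fetches the ads for all zones tied to this invocation -->
<script type='text/javascript' src='http://ads.example.com/www/delivery/spcjs.php?id=1'></script>

<!-- then, wherever each banner should appear on the page: -->
<script type='text/javascript'>OA_show('zone_name');</script>
```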

