Separate Robots for HTTP and HTTPS traffic
unjc email
unjc.email at gmail.com
Wed Jan 30 22:29:55 UTC 2013
Is it possible to configure a workload so that it generates a constant
HTTP load with a fixed number of robots while ramping the HTTPS load by
increasing the number of HTTPS robots? Please kindly provide a workload
sample showing how to do this.
For example, 200 robots are dedicated to generating HTTP requests
constantly, acting as background traffic; at the same time, another
group of robots, responsible for generating HTTPS requests, ramps from
1 to 100.
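
Here is a rough sketch of what I have in mind. I am not sure the Phase
part is right, because populus_factor seems to scale all robots at
once rather than just one category, which is exactly what I want to
avoid for the HTTP group (the addresses and the wrapSsl name are made
up for illustration):

// 200 robots generating constant background HTTP traffic
Robot rPlain = {
    kind = "R101";
    origins = M1.names;
    addresses = ['lo::172.2.1.1-200'];
};

// up to 100 robots generating HTTPS traffic
Robot rSecure = {
    kind = "R101";
    origins = M2.names;
    ssl_wraps = [wrapSsl];
    addresses = ['lo::172.2.2.1-100'];
};

// ramp the robot population from 1% to 100% of its maximum;
// I believe this factor applies to all robots, not just rSecure
Phase phRamp = {
    name = "ramp";
    goal.duration = 10min;
    populus_factor_beg = 1%;
    populus_factor_end = 100%;
};

schedule(phRamp);
use(rPlain, rSecure);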
Thanks, any help is greatly appreciated,
Jacky
On Fri, Dec 21, 2012 at 7:45 PM, Alex Rousskov
<rousskov at measurement-factory.com> wrote:
> On 12/21/2012 01:38 PM, unjc email wrote:
>
>> I have already set up a workload that generates mixed http/https
>> traffic. Since there is an issue with the https proxy, the http
>> traffic is heavily affected because the same robots are responsible
>> for both types of traffic.
>
> Just FYI: This is, in part, a side-effect of your best-effort workload.
> In constant-pressure workloads (Robot.req_rate is defined), individual
> Robot transactions may share open connection limits but not much else,
> and so SSL proxy problems do not decrease the HTTP traffic rate.
>
> Please be extra careful with best-effort workloads as they often produce
> misleading results.
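>
> For example, the only configuration difference is whether
> Robot.req_rate is defined (a minimal sketch; the 0.5/sec value is
> arbitrary):
>
> Robot rBestEffort = {
>     ... other robot settings ...
>     req_rate = undef(); // best effort: robots go as fast as they can
> };
>
> Robot rConstant = {
>     ... other robot settings ...
>     req_rate = 0.5/sec; // constant pressure: fixed per-robot rate
> };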
>
>
>> Would any of you please advise how I could configure two kinds of
>> robots (one for HTTP and one for HTTPS), bound to two different
>> loop-back IP pools?
>
> Just define and use two Robot objects. You already have two Server
> objects. You can do the same with Robots.
>
> If you want to reduce PGL code duplication, you can use this trick:
>
> Robot rCommon = {
>     ... settings common to both robots ...
> };
>
> Robot rSecure = rCommon;
> rSecure = {
>     ... settings specific to the SSL robot ...
> };
>
> Robot rPlain = rCommon;
> rPlain = {
>     ... settings specific to the HTTP robot ...
> };
>
> // use both robots after finalizing all their details
> use(rSecure, rPlain);
>
>
>
>> I understand it will probably not be possible to distribute the load
>> by using the Robot's origins (origins = [M1.names, M2.names: 10%];)
>> anymore. I assume I could instead dedicate a different number of
>> robots to each robot type.
>
> Yes, but you can apply a similar trick to robot addresses instead of
> origin addresses:
>
> // compute addresses for all robots
> theBench.client_side.addresses =
>     robotAddrs(authAddrScheme, theBench);
>
> // randomly split computed addresses across two robot categories
> [ rSecure.addresses: 10%, rPlain.addresses ] =
>     theBench.client_side.addresses;
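>
> Putting the two tricks together with the values from your workload,
> the result could look roughly like this (a sketch only; rCommon holds
> the settings shared by both robots, and wrap1 is the SslWrap from
> your config):
>
> Robot rCommon = {
>     kind = "R101";
>     recurrence = 50%;
>     req_rate = undef();
>     http_versions = ["1.0"];
>     ... other settings shared by both robots ...
> };
>
> Robot rPlain = rCommon;
> rPlain = {
>     origins = M1.names; // plain HTTP traffic only
> };
>
> Robot rSecure = rCommon;
> rSecure = {
>     origins = M2.names; // HTTPS traffic only
>     ssl_wraps = [wrap1];
> };
>
> theBench.client_side.addresses =
>     robotAddrs(authAddrScheme, theBench);
> [ rSecure.addresses: 10%, rPlain.addresses ] =
>     theBench.client_side.addresses;
>
> use(rSecure, rPlain);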
>
>
> HTH,
>
> Alex.
>
>
>
>> Bench theBench = {
>>     peak_req_rate = 1000/sec;
>>     client_side = {
>>         hosts = ['192.168.128.36', '192.168.128.37'];
>>         addr_space = ['lo::172.1.2-250.20-30'];
>>         max_host_load = theBench.peak_req_rate/count(client_side.hosts);
>>         max_agent_load = theBench.peak_req_rate/totalRobots;
>>     };
>>     server_side = {
>>         hosts = ['192.168.102.206', '192.168.102.207'];
>>         max_host_load = theBench.peak_req_rate;
>>         max_agent_load = theBench.peak_req_rate;
>>     };
>> };
>>
>> Server S1 = {
>>     kind = "S101";
>>     contents = [JpgContent: 73.73%, HtmlContent: 11.45%,
>>         SwfContent: 13.05%, FlvContent: 0.06%, Mp3Content: 0.01%,
>>         cntOther];
>>     direct_access = contents;
>>     addresses = M1.addresses;
>>     http_versions = ["1.0"];
>> };
>>
>> Server S2 = {
>>     kind = "S101";
>>     contents = [JpgContent: 73.73%, HtmlContent: 11.45%,
>>         SwfContent: 13.05%, FlvContent: 0.06%, Mp3Content: 0.01%,
>>         cntOther];
>>     direct_access = contents;
>>     SslWrap wrap1 = {
>>         ssl_config_file = "/tmp/ssl.conf";
>>         protocols = ["any"];
>>         ciphers = ["ALL:HIGH": 100%];
>>         rsa_key_sizes = [1024bit];
>>         session_resumption = 40%;
>>         session_cache = 100;
>>     };
>>     ssl_wraps = [wrap1];
>>     addresses = M2.addresses;
>>     http_versions = ["1.0"];
>> };
>>
>> Robot R = {
>>     kind = "R101";
>>     pop_model = {
>>         pop_distr = popUnif();
>>     };
>>     recurrence = 50%;
>>     req_rate = undef();
>>     origins = [M1.names, M2.names: 10%];
>>     credentials = select(totalMemberSpace, totalRobots);
>>     SslWrap wrap1 = {
>>         ssl_config_file = "/tmp/ssl.conf";
>>         protocols = ["any"];
>>         ciphers = ["ALL:HIGH": 100%];
>>         rsa_key_sizes = [1024bit];
>>         session_resumption = 40%;
>>         session_cache = 100;
>>     };
>>     ssl_wraps = [wrap1];
>>     addresses = robotAddrs(authAddrScheme, theBench);
>>     pconn_use_lmt = const(2147483647);
>>     idle_pconn_tout = idleConnectionTimeout;
>>     open_conn_lmt = maxConnPerRobot;
>>     http_versions = ["1.0"];
>> };