Separate Robots for HTTP and HTTPS traffic

Dmitry Kurochkin dmitry.kurochkin at measurement-factory.com
Mon Feb 4 16:15:24 UTC 2013


Hi Jacky.

unjc email <unjc.email at gmail.com> writes:

> Is it possible to configure a workload so that it could generate a
> constant HTTP load with a fixed number of robots while ramping the
> HTTPS load by increasing the number of HTTPS robots?  Please provide a
> sample workload showing how to do this.
>

I am afraid this is not possible at the moment.  The Robot population is
controlled by the populus factor.  When the populus factor increases,
randomly selected Robots (from all available non-running Robots) are
started.  When the populus factor decreases, randomly selected running
Robots are stopped.
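
For reference, the populus factor is normally driven by Phase fields.  A
minimal sketch (phase names, durations, and factor values below are made
up) that ramps the Robot population up and then back down:

    Phase phRampUp = {
        name = "rampUp";
        goal.duration = 5min;
        populus_factor_beg = 0.1; // start with 10% of all Robots
        populus_factor_end = 1.0; // end with all Robots running
    };

    Phase phRampDown = {
        name = "rampDown";
        goal.duration = 5min;
        populus_factor_beg = 1.0;
        populus_factor_end = 0.1; // randomly selected Robots are stopped
    };

    schedule(phRampUp, phRampDown);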

> For example, 200 robots are dedicated to generating HTTP requests
> constantly, acting as background traffic; at the same time, another
> group of robots, responsible for generating HTTPS requests, ramps from
> 1 to 100.
>
>

You may be able to achieve this by running separate Polygraph client
processes with different workloads: one would simulate a fixed number of
HTTP Robots, while the other would simulate HTTPS Robots.  Running a
single test with different workloads is tricky; you can easily shoot
yourself in the foot.
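
For illustration only (file names, durations, and factor values below are
hypothetical), each workload would define its own Robots and drive its
own populus factor through its phases:

    // http.pg: keep the full HTTP Robot population for the whole run
    Phase phSteady = {
        name = "steady";
        goal.duration = 30min;
        populus_factor_beg = 1.0;
        populus_factor_end = 1.0;
    };
    schedule(phSteady);

    // https.pg: ramp the HTTPS Robot population from 1% to 100%
    Phase phRamp = {
        name = "ramp";
        goal.duration = 30min;
        populus_factor_beg = 0.01;
        populus_factor_end = 1.0;
    };
    schedule(phRamp);

Each workload would then be run by its own polygraph-client process.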

To properly support this feature, I guess we would need to add some sort
of "agent groups", each with a separate populus factor.  At the moment,
we do not plan to implement it.  You are welcome to sponsor development
of this feature!

Regards,
  Dmitry

>
>
> Thanks, greatly appreciated,
> Jacky
>
> On Fri, Dec 21, 2012 at 7:45 PM, Alex Rousskov
> <rousskov at measurement-factory.com> wrote:
>> On 12/21/2012 01:38 PM, unjc email wrote:
>>
>>> I have already set up a workload that generates mixed http/https
>>> traffic.  Since there is an issue with the https proxy, the http
>>> traffic is heavily affected because the same robots are responsible
>>> for both types of traffic.
>>
>> Just FYI: This is, in part, a side effect of your best-effort workload.
>> In constant-pressure workloads (where Robot.req_rate is defined),
>> individual Robot transactions may share open connection limits but not
>> much else, so SSL proxy problems do not decrease the HTTP traffic rate.
>>
>> Please be extra careful with best-effort workloads as they often produce
>> misleading results.
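>>
>> For example (the exact rate below is just an assumption), defining
>> req_rate switches a Robot from best-effort to constant pressure:
>>
>>     Robot R = {
>>         ... other settings ...
>>         // each Robot submits about half a request per second,
>>         // regardless of how slowly the proxy responds
>>         req_rate = 0.5/sec;
>>     };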
>>
>>
>>> Would any of you please advise how I could configure two kinds of
>>> robots (one for HTTP and one for HTTPS) bound to two different
>>> loop-back IP pools?
>>
>> Just define and use two Robot objects.  You already have two Server
>> objects; you can do the same with Robots.
>>
>> If you want to reduce PGL code duplication, you can use this trick:
>>
>>     Robot rCommon = {
>>         ... settings common to both robots ...
>>     };
>>
>>     Robot rSecure = rCommon;
>>     rSecure = {
>>         ... settings specific to the SSL robot ...
>>     };
>>
>>     Robot rPlain = rCommon;
>>     rPlain = {
>>         ... settings specific to the HTTP robot ...
>>     };
>>
>>     // use both robots after finalizing all their details
>>     use(rSecure, rPlain);
>>
>>
>>
>>> I understand it will probably not be possible to distribute the load
>>> using the Robot's origins (origins = [M1.names, M2.names: 10%];)
>>> anymore.  I assume I could try to dedicate a different number of
>>> robots to each robot type.
>>
>> Yes, but you can apply a similar trick to robot addresses instead of
>> origin addresses:
>>
>>     // compute addresses for all robots
>>     theBench.client_side.addresses =
>>         robotAddrs(authAddrScheme, theBench);
>>
>>     // randomly split computed addresses across two robot categories
>>     [ rSecure.addresses: 10%, rPlain.addresses ] =
>>         theBench.client_side.addresses;
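>>
>> If you also want each Robot category to talk only to "its" servers
>> (assuming, as in your workload, that M1 maps the plain HTTP servers
>> and M2 the SSL-wrapped ones), you can give each Robot its own origins:
>>
>>     rPlain.origins = [M1.names];
>>     rSecure.origins = [M2.names];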
>>
>>
>> HTH,
>>
>> Alex.
>>
>>
>>
>>> Bench theBench = {
>>>       peak_req_rate = 1000/sec;
>>>       client_side = {
>>>               hosts = ['192.168.128.36','192.168.128.37'];
>>>               addr_space = ['lo::172.1.2-250.20-30'];
>>>               max_host_load = theBench.peak_req_rate/count(client_side.hosts);
>>>               max_agent_load = theBench.peak_req_rate/totalRobots;
>>>       };
>>>       server_side = {
>>>               hosts = ['192.168.102.206','192.168.102.207'];
>>>               max_host_load = theBench.peak_req_rate;
>>>               max_agent_load = theBench.peak_req_rate;
>>>       };
>>> };
>>>
>>> Server S1 = {
>>>       kind = "S101";
>>>       contents = [JpgContent: 73.73%, HtmlContent: 11.45%,
>>>               SwfContent: 13.05%, FlvContent: 0.06%,
>>>               Mp3Content: 0.01%, cntOther];
>>>       direct_access = contents;
>>>       addresses = M1.addresses;
>>>       http_versions = ["1.0"];
>>> };
>>>
>>> Server S2 = {
>>>       kind = "S101";
>>>       contents = [JpgContent: 73.73%, HtmlContent: 11.45%,
>>>               SwfContent: 13.05%, FlvContent: 0.06%,
>>>               Mp3Content: 0.01%, cntOther];
>>>       direct_access = contents;
>>>       SslWrap wrap1 = {
>>>               ssl_config_file = "/tmp/ssl.conf";
>>>               protocols = ["any"];
>>>               ciphers = ["ALL:HIGH": 100%];
>>>               rsa_key_sizes = [1024bit];
>>>               session_resumption = 40%;
>>>               session_cache = 100;
>>>       };
>>>       ssl_wraps = [wrap1];
>>>       addresses = M2.addresses;
>>>       http_versions = ["1.0"];
>>> };
>>>
>>> Robot R = {
>>>       kind = "R101";
>>>       pop_model = {
>>>               pop_distr = popUnif();
>>>       };
>>>       recurrence = 50%;
>>>       req_rate = undef();
>>>       origins = [M1.names, M2.names: 10%];
>>>       credentials = select(totalMemberSpace, totalRobots);
>>>       SslWrap wrap1 = {
>>>               ssl_config_file = "/tmp/ssl.conf";
>>>               protocols = ["any"];
>>>               ciphers = ["ALL:HIGH": 100%];
>>>               rsa_key_sizes = [1024bit];
>>>               session_resumption = 40%;
>>>               session_cache = 100;
>>>       };
>>>       ssl_wraps = [wrap1];
>>>       addresses = robotAddrs(authAddrScheme, theBench);
>>>       pconn_use_lmt = const(2147483647);
>>>       idle_pconn_tout = idleConnectionTimeout;
>>>       open_conn_lmt = maxConnPerRobot;
>>>       http_versions = ["1.0"];
>>> };


