Sending multiple requests in single SSL connection

Alex Rousskov rousskov at measurement-factory.com
Thu Feb 7 22:51:31 UTC 2013


On 02/07/2013 03:23 PM, unjc email wrote:

> I believe my testing environment falls in the first case
> (WP Clients > F5 (single entry point) > squid (> poisoned DNS) > WP
> Servers).  I agree with you that the way the Robots handle the HTTP
> traffic now is fine.  However, would the same behavior apply to
> HTTPS traffic?

When sending HTTPS requests through a forward proxy, the next HTTP hop
is the origin server and not the proxy, so you should see [encrypted]
server-specific persistent connections (tunneled at the TCP level through
the proxy). In that case, from the robot's point of view, the proxy is
the next TCP hop but not the next HTTP hop.
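In PGL terms, the proxy is configured via the Robot::http_proxies field
mentioned below. A minimal sketch, with a hypothetical proxy address and
robot address (not from your workload):

    Robot R = {
        kind = "R101";
        origins = M2.names;               // your HTTPS address map
        addresses = ['10.1.1.1'];         // hypothetical robot address
        http_proxies = ['10.0.0.1:3128']; // hypothetical forward proxy
        // HTTPS requests are tunneled through the proxy using CONNECT, so
        // idle persistent connections are keyed by the origin server (the
        // next HTTP hop), not by the proxy (the next TCP hop).
    };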


> Does the Robot's "open_conn_lmt" value need to be equal to
> the size of the domain list in order to support domain-specific persistent
> connections?  Say, I have 500 robots and 5000 domains in my
> address-map; it does not seem to make sense for each robot to keep an
> SSL connection open for each domain.  With 1 (or a finite value) in
> the Robot's "open_conn_lmt", what is the proper way to set up the workload
> to avoid closing an SSL connection after sending only one HTTPS
> request?  Please advise.

Open connection limit and idle connection pools are robot properties.
Individual robots, just like individual browsers or individual proxies,
do not share those limits and pools. If you want two robots to share
connections, you probably want one robot (that is twice as fast) instead.
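For example, instead of two robots that each generate requests at some
rate, you could configure a single robot at twice that rate so that all
of those requests share one connection pool. A sketch with hypothetical
rates and addresses:

    Robot FastR = {
        kind = "R101";
        origins = [M1.names, M2.names: 10%];
        addresses = ['10.1.1.2'];  // hypothetical: one robot instead of two
        req_rate = 1/sec;          // one robot at 1/sec replaces two at 0.5/sec
    };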


The number of open connections is the sum of idle persistent connections
and concurrent connections. Thus, it is not possible to predict that
number based on the number of origin servers alone. Factors such as
request rate and response time will affect it.

Since a Polygraph robot may open several concurrent connections to the
same host, your open connection limit may have to be larger than the
number of active concurrent connections plus the number of origin
servers, but the exact number is difficult to predict because idle
connections are managed using a LIFO queue (IIRC) and have timeouts,
both of which are difficult to correlate with your "after sending one
request" criterion.


I do not know what your end goal is, but perhaps you can approach it
from the opposite end? Remove the open connection limit, monitor traffic
to measure the number of open connections across all robots (active and
idle; see the last column of console i-phase lines), and only then set
the robot limit to the maximum your system can comfortably support. If
you see too many open connections but cannot limit them, perhaps you
need fewer robots (or more Polygraph processes or more bench drones)?
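In PGL, the two-step approach could look like this; the final value is
hypothetical and would come from your own monitoring:

    // Step 1: run with the limit disabled (open_conn_lmt left unset) and
    // watch the last column of the console i-phase lines for the peak
    // open-connection count.
    // Step 2: re-enable the limit at a level your bench can support:
    int maxConnPerRobot = 120; // hypothetical value taken from step 1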


HTH,

Alex.



> On Thu, Feb 7, 2013 at 4:45 PM, Alex Rousskov wrote:
>> On 02/07/2013 01:14 PM, unjc email wrote:
>>
>>> However, the HTTP
>>> requests found in a single TCP stream are addressed for different
>>> hosts (google.com, yahoo.com... ).  What is the trick to make robots
>>> to send multiple requests for the same host (e.g. google.com) per
>>> persistent connection so that the persistent connections are domain
>>> specific?
>>
>> In HTTP, persistent connections are maintained by clients on the "next
>> HTTP hop" basis. If all robots are talking to a single forward proxy,
>> then all connections will have the same next HTTP hop. Is that what is
>> happening in your setup?
>>
>> If yes, there is no knob to change that robot behavior (and such a
>> change would be unrealistic because real browsers behave similarly to
>> Polygraph robots in this area). However, you might be able to force
>> robots to open more connections if you list many identical proxy
>> addresses in the Robot::http_proxies field. This hack is untested, and the
>> connections would still not be specific to origin servers.
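>> A sketch of that untested hack, with a hypothetical proxy address (the
>> repetition is what may force the extra connections):
>>
>>     Robot R = {
>>         kind = "R101";
>>         origins = M2.names;
>>         addresses = ['10.1.1.1'];  // hypothetical
>>         http_proxies = ['10.0.0.1:3128', '10.0.0.1:3128', '10.0.0.1:3128'];
>>     };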
>>
>> If you do not have a forward proxy configured, then robots should not
>> use a connection to origin server A.com for sending requests to origin
>> server B.com. If that happens, I encourage you to report it as a
>> Polygraph bug on Launchpad. If you do that, please attach the workload
>> file and a pcap packet capture to your bug report.
>>
>>
>> Thank you,
>>
>> Alex.
>>
>>
>>
>>> On Tue, Feb 5, 2013 at 8:06 PM, Alex Rousskov
>>> <rousskov at measurement-factory.com> wrote:
>>>> On 02/05/2013 03:55 PM, unjc email wrote:
>>>>
>>>>> As mentioned, I have specified a list of domains for HTTPS requests. Do
>>>>> WP robots send a few requests against the same host before going to the
>>>>> next one?
>>>>>
>>>>>  AddrMap M2 = {
>>>>>        names = ['google.com:9191','facebook.com:9191','youtube.com:9191'...
>>>>>
>>>>> Request #1: https://google.com:9191/.../t05/_00000002.html
>>>>> Request #2: https://facebook.com:9191/.../t05/_00000003.html
>>>>> Request #3: https://youtube.com:9191/.../t05/_00000004.html
>>>>>
>>>>> If the robots send HTTPS requests according to the sequence specified
>>>>> in the address map (like the example shown above),
>>>>
>>>> Robots select a random server from the Robot.origins array. If you
>>>> prefer to control the order of requests, you can tell a robot to replay
>>>> a trace file.
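>>>> A sketch, assuming the replay knob is the Robot::foreign_trace field
>>>> (please double-check the PGL reference; the file path is hypothetical):
>>>>
>>>>     Robot R = {
>>>>         kind = "R101";
>>>>         addresses = ['10.1.1.1'];          // hypothetical
>>>>         foreign_trace = "/tmp/urls.trace"; // URLs requested in file order
>>>>     };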
>>>>
>>>>
>>>>> then the SSL connection
>>>>> would be terminated before sending a new request to the next host, is
>>>>> that correct?
>>>>
>>>> Robots close connections based on configured timeouts, robot connection
>>>> limits, HTTP message properties, next HTTP hop decisions, and various
>>>> errors. I believe that is true for both plain and encrypted connections.
>>>>
>>>> If a robot has to switch from server S1 to server S2 then the persistent
>>>> connection to S1 (if any) may be placed in the pool of idle persistent
>>>> connections, to be reused when the same robot decides to revisit S1
>>>> again (unless it has been purged from the pool due to timeout,
>>>> connection limit, or disconnect).
>>>>
>>>>
>>>>> If I really want to send multiple HTTPS requests via the
>>>>> same SSL connection, do I need to modify the address map as shown below?
>>>>>
>>>>>  AddrMap M2 = {
>>>>>        names = ['google.com:9191','google.com:9191','google.com:9191','google.com:9191','facebook.com:9191','facebook.com:9191','facebook.com:9191','youtube.com:9191'...
>>>>
>>>>
>>>> No. Address maps and SSL encryption are not directly related to HTTP
>>>> persistent connection reuse. If your focus is on getting persistent
>>>> connections to work, you need to set the pconn_use_lmt and idle_pconn_tout
>>>> options on both the robot and the server side of the test. If possible, I
>>>> recommend getting that to work without SSL first (just to keep things
>>>> simpler) and then enabling SSL.
>>>>
>>>> Also, I would disable open_conn_lmt to start with and then enable it
>>>> when everything is working.
>>>>
>>>> Finally, I would start with a single robot to make triage easier.
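>>>> Putting those three suggestions together, a minimal sketch with
>>>> hypothetical values:
>>>>
>>>>     Server S = {
>>>>         kind = "S101";
>>>>         pconn_use_lmt = const(100); // allow many requests per connection
>>>>         idle_pconn_tout = 30sec;    // keep idle connections for a while
>>>>     };
>>>>
>>>>     Robot R = {
>>>>         kind = "R101";
>>>>         origins = M2.names;
>>>>         addresses = ['10.1.1.1'];   // hypothetical single robot
>>>>         pconn_use_lmt = const(100);
>>>>         idle_pconn_tout = 30sec;
>>>>         // open_conn_lmt left unset until pconns work as expected
>>>>     };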
>>>>
>>>>
>>>> HTH,
>>>>
>>>> Alex.
>>>>
>>>>
>>>>
>>>>> On Tue, Dec 18, 2012 at 6:20 PM, Dmitry Kurochkin wrote:
>>>>>> Hi Jacky.
>>>>>>
>>>>>> unjc email <unjc.email at gmail.com> writes:
>>>>>>
>>>>>>> Hello there,
>>>>>>>
>>>>>>> I need some help configuring SSL sessions.  The following is what I
>>>>>>> have configured for the robot.  I want to configure the client
>>>>>>> workload to send three or four requests per SSL connection.  With the
>>>>>>> current setting, I found each HTTPS request has its own SSL connection
>>>>>>> and it is closed upon receiving the requested object.  Please advise
>>>>>>> the correct setting to configure robots to make multiple requests in a
>>>>>>> single SSL connection.
>>>>>>>
>>>>>>
>>>>>> Robot config looks good.  Did you set pconn_use_lmt for Server?
>>>>>>
>>>>>>> As you see I have set two domain lists for the clients, one set is for
>>>>>>> HTTP requests and the other set for HTTPS requests.  They are all
>>>>>>> unique domains.  Would there be a problem for robots to reuse SSL
>>>>>>> connections for requesting different objects from the same site/domain?
>>>>>>>
>>>>>>
>>>>>> No.
>>>>>>
>>>>>> Regards,
>>>>>>   Dmitry
>>>>>>
>>>>>>> Robot R = {
>>>>>>>       kind = "R101";
>>>>>>>       pop_model = {
>>>>>>>               pop_distr = popUnif();
>>>>>>>       };
>>>>>>>       recurrence = 50%;
>>>>>>>       req_rate = undef();
>>>>>>>       origins = [M1.names, M2.names: 10%];
>>>>>>>       credentials = select(totalMemberSpace, totalRobots);
>>>>>>>       SslWrap wrap1 = {
>>>>>>>               ssl_config_file = "/tmp/ssl.conf";
>>>>>>>               protocols = ["any"];
>>>>>>>               ciphers = ["ALL:HIGH": 100%];
>>>>>>>               rsa_key_sizes = [1024bit];
>>>>>>>               session_resumption = 40%;
>>>>>>>               session_cache = 100;
>>>>>>>       };
>>>>>>>       ssl_wraps = [wrap1];
>>>>>>>       addresses = robotAddrs(authAddrScheme, theBench);
>>>>>>>       pconn_use_lmt = const(2147483647);
>>>>>>>       idle_pconn_tout = idleConnectionTimeout;
>>>>>>>       open_conn_lmt = maxConnPerRobot;
>>>>>>>       http_versions = ["1.0"];
>>>>>>> };
>>>>>>>
>>>>>>> AddrMap M1 = {
>>>>>>>       names = ['affiliate.de:9090','buzzfeed.com:9090','usbank.com:9090'...
>>>>>>>
>>>>>>> AddrMap M2 = {
>>>>>>>       names = ['google.com:9191','facebook.com:9191','youtube.com:9191'...
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Thank you very much,
>>>>>>> Jacky



