From dmitry.kurochkin at measurement-factory.com Mon Feb 4 16:15:24 2013
From: dmitry.kurochkin at measurement-factory.com (Dmitry Kurochkin)
Date: Mon, 04 Feb 2013 20:15:24 +0400
Subject: Separate Robots for HTTP and HTTPS traffic
In-Reply-To: 
References: <50D502BC.2040809@measurement-factory.com>
Message-ID: <87sj5cnjhf.fsf@gmail.com>

Hi Jacky.

unjc email writes:

> Is it possible to configure a workload so that it could generate
> constant HTTP load with a fixed number of robots while ramping HTTPS
> load with an increasing number of HTTPS robots? Please kindly provide
> a workload sample of how to do this.
>

I am afraid this is not possible at the moment. The Robot population is
controlled by the populus factor. When the populus factor increases,
randomly selected Robots (from all available non-running Robots) are
started. When the populus factor decreases, randomly selected running
Robots are stopped.

> For example, 200 robots are dedicated to generating HTTP requests
> constantly, acting as background traffic; at the same time, another
> group of robots, responsible for generating HTTPS requests, is ramping
> from 1 to 100.
>

You may be able to achieve this by running separate Polygraph client
processes with different workloads: one would simulate a fixed number of
HTTP Robots, another would simulate HTTPS Robots. It is tricky to run a
single test with different workloads; you can easily shoot yourself in
the foot.

To properly support this feature, I guess we would need to add some sort
of "agent groups", each with a separate populus factor. At the moment,
we do not plan to implement it. You are welcome to sponsor development
of this feature!

Regards,
  Dmitry

>
>
> Thanks and greatly appreciate,
> Jacky
> On Fri, Dec 21, 2012 at 7:45 PM, Alex Rousskov
> wrote:
>> On 12/21/2012 01:38 PM, unjc email wrote:
>>
>>> I have already set up a workload that generates mixed http/https
>>> traffic. Since there is an issue with the https proxy, the http
>>> traffic is heavily affected because the same robots are responsible
>>> for both types of traffic.
>>
>> Just FYI: This is, in part, a side-effect of your best-effort workload.
>> In constant-pressure workloads (Robot.req_rate is defined), individual
>> Robot transactions may share open connection limits but not much else,
>> and so SSL proxy problems do not decrease HTTP traffic rate.
>>
>> Please be extra careful with best-effort workloads as they often produce
>> misleading results.
>>
>>
>>> Would any of you please advise how I could
>>> configure two kinds of robots (one for http and one for https) binding
>>> to two different loop-back IP pools?
>>
>> Just define and use two Robot objects. You already have two Server
>> objects. You can do the same with Robots.
>>
>> If you want to reduce PGL code duplication, you can use this trick:
>>
>> Robot rCommon = {
>>     ... settings common to both robots ...
>> };
>>
>> Robot rSecure = rCommon;
>> rSecure = {
>>     ... settings specific to the SSL robot ...
>> };
>>
>> Robot rPlain = rCommon;
>> rPlain = {
>>     ... settings specific to the HTTP robot ...
>> };
>>
>> // use both robots after finalizing all their details
>> use(rSecure, rPlain);
>>
>>
>>> I understand it will probably not be possible to distribute the load
>>> through using Robot's origins (origins = [M1.names, M2.names: 10%];)
>>> anymore. I assume I could try to dedicate a different number of robots
>>> for each robot type.
>> >> Yes, but you can apply a similar trick to robot addresses instead of >> origin addresses: >> >> // compute addresses for all robots >> theBench.client_side.addresses = >> robotAddrs(authAddrScheme, theBench); >> >> // randomly split computed addresses across two robot categories >> [ rSecure.addresses: 10%, rPlain.addresses ] = >> theBench.client_side.addresses; >> >> >> HTH, >> >> Alex. >> >> >> >>> Bench theBench = { >>> peak_req_rate = 1000/sec; >>> client_side = { >>> hosts = ['192.168.128.36','192.168.128.37']; >>> addr_space = ['lo::172.1.2-250.20-30']; >>> max_host_load = theBench.peak_req_rate/count(client_side.hosts); >>> max_agent_load = theBench.peak_req_rate/totalRobots; >>> }; >>> server_side = { >>> hosts = ['192.168.102.206','192.168.102.207']; >>> max_host_load = theBench.peak_req_rate; >>> max_agent_load = theBench.peak_req_rate; >>> }; >>> }; >>> >>> Server S1 = { >>> kind = "S101"; >>> contents = [JpgContent: 73.73%, HtmlContent: 11.45%, SwfContent: >>> 13.05%, FlvContent: 0.06%, Mp3Content: 0.01%, cntOther]; >>> direct_access = contents; >>> addresses = M1.addresses; >>> http_versions = ["1.0"]; >>> }; >>> >>> Server S2 = { >>> kind = "S101"; >>> contents = [JpgContent: 73.73%, HtmlContent: 11.45%, SwfContent: >>> 13.05%, FlvContent: 0.06%, Mp3Content: 0.01%, cntOther]; >>> direct_access = contents; >>> SslWrap wrap1 = { >>> ssl_config_file = "/tmp/ssl.conf"; >>> protocols = ["any"]; >>> ciphers = ["ALL:HIGH": 100%]; >>> rsa_key_sizes = [1024bit]; >>> session_resumption = 40%; >>> session_cache = 100; >>> }; >>> ssl_wraps = [wrap1]; >>> addresses = M2.addresses; >>> http_versions = ["1.0"]; >>> }; >>> >>> Robot R = { >>> kind = "R101"; >>> pop_model = { >>> pop_distr = popUnif(); >>> }; >>> recurrence = 50%; >>> req_rate = undef(); >>> origins = [M1.names, M2.names: 10%]; >>> credentials = select(totalMemberSpace, totalRobots); >>> SslWrap wrap1 = { >>> ssl_config_file = "/tmp/ssl.conf"; >>> protocols = ["any"]; >>> ciphers = ["ALL:HIGH": 100%]; >>> rsa_key_sizes = [1024bit]; >>> session_resumption = 40%; >>> session_cache = 100; >>> }; >>> ssl_wraps = [wrap1]; >>> addresses = robotAddrs(authAddrScheme, theBench); >>> pconn_use_lmt = const(2147483647); >>> idle_pconn_tout = idleConnectionTimeout; >>> open_conn_lmt = maxConnPerRobot; >>> http_versions = ["1.0"]; >>> }; >>> _______________________________________________ >>> Users mailing list >>> Users at web-polygraph.org >>> http://www.web-polygraph.org/mailman/listinfo/users >>> >> > _______________________________________________ > Users mailing list > Users at web-polygraph.org > http://www.web-polygraph.org/mailman/listinfo/users From unjc.email at gmail.com Tue Feb 5 22:55:20 2013 From: unjc.email at gmail.com (unjc email) Date: Tue, 5 Feb 2013 17:55:20 -0500 Subject: Sending multiple requests in single SSL connection In-Reply-To: <87bodr7xrx.fsf@gmail.com> References: <87bodr7xrx.fsf@gmail.com> Message-ID: Hi Dmitry, As mentioned I have specified a list of domains for HTTPS requests, do WP robots send few requests against the same host before going to the next one? AddrMap M2 = { names = ['google.com:9191','facebook.com:9191','youtube.com:9191'... 
Request #1: https://google.com:9191/.../t05/_00000002.html Request #2: https://facebook.com:9191/.../t05/_00000003.html Request #3: https://youtube.com:9191/.../t05/_00000004.html If the robots sends HTTPS requests according to the sequence specified in address-map (like the example shown above), then the SSL connection would be terminated before sending new request to the next host, is it correct? If I really want to send multiple HTTPS requests via the same SSL connection, do I need to modify the address-map like below? AddrMap M2 = { names = ['google.com:9191','google.com:9191','google.com:9191','google.com:9191','facebook.com:9191','facebook.com:9191','facebook.com:9191','youtube.com:9191'... Thanks, Jacky On Tue, Dec 18, 2012 at 6:20 PM, Dmitry Kurochkin wrote: > Hi Jacky. > > unjc email writes: > >> Hello there, >> >> I need some help in configuring SSL session. The following is what I >> have configured for the robot. I want to configure the client >> workload to send three or four requests per SSL connection. With the >> current setting, I found each HTTPS request has its own SSL connection >> and it is closed upon receiving the requested object. Please advise >> the correct setting to configure robots to make multiple requests in a >> single SSL connection. >> > > Robot config looks good. Did you set pconn_use_lmt for Server? > >> As you see I have set two domain lists for the clients, one set is for >> HTTP requests and the other set for HTTPS requests. They are all >> unique domains. Would there be a problem for robots to reuse SSL >> connections for requesting different objects fromthe same site/domain? >> > > No. > > Regards, > Dmitry > >> Robot R = { >> kind = "R101"; >> pop_model = { >> pop_distr = popUnif(); >> }; >> recurrence = 50%; >> req_rate = undef(); >> origins = [M1.names, M2.names: 10%]; >> credentials = select(totalMemberSpace, totalRobots); >> SslWrap wrap1 = { >> ssl_config_file = "/tmp/ssl.conf"; >> protocols = ["any"]; >> ciphers = ["ALL:HIGH": 100%]; >> rsa_key_sizes = [1024bit]; >> session_resumption = 40%; >> session_cache = 100; >> }; >> ssl_wraps = [wrap1]; >> addresses = robotAddrs(authAddrScheme, theBench); >> pconn_use_lmt = const(2147483647); >> idle_pconn_tout = idleConnectionTimeout; >> open_conn_lmt = maxConnPerRobot; >> http_versions = ["1.0"]; >> }; >> >> AddrMap M2 = { >> names = ['affiliate.de:9090','buzzfeed.com:9090','usbank.com:9090'... >> >> AddrMap M2 = { >> names = ['google.com:9191','facebook.com:9191','youtube.com:9191'... >> >> >> >> >> >> Thank you very much, >> Jacky >> _______________________________________________ >> Users mailing list >> Users at web-polygraph.org >> http://www.web-polygraph.org/mailman/listinfo/users From rousskov at measurement-factory.com Wed Feb 6 01:06:31 2013 From: rousskov at measurement-factory.com (Alex Rousskov) Date: Tue, 05 Feb 2013 18:06:31 -0700 Subject: Sending multiple requests in single SSL connection In-Reply-To: References: <87bodr7xrx.fsf@gmail.com> Message-ID: <5111AC97.7040407@measurement-factory.com> On 02/05/2013 03:55 PM, unjc email wrote: > As mentioned I have specified a list of domains for HTTPS requests, do > WP robots send few requests against the same host before going to the > next one? > > AddrMap M2 = { > names = ['google.com:9191','facebook.com:9191','youtube.com:9191'... 
> > Request #1: https://google.com:9191/.../t05/_00000002.html > Request #2: https://facebook.com:9191/.../t05/_00000003.html > Request #3: https://youtube.com:9191/.../t05/_00000004.html > > If the robots sends HTTPS requests according to the sequence specified > in address-map (like the example shown above), Robots select a random server from the Robot.origins array. If you prefer to control the order of requests, you can tell a robot to replay a trace file. > then the SSL connection > would be terminated before sending new request to the next host, is it > correct? Robots close connections based on configured timeouts, robot connection limits, HTTP message properties, next HTTP hop decisions, and various errors. I believe that is true for both plain and encrypted connections. If a robot has to switch from server S1 to server S2 then the persistent connection to S1 (if any) may be placed in the pool of idle persistent connections, to be reused when the same robot decides to revisit S1 again (unless it has been purged from the pool due to timeout, connection limit, or disconnect). > If I really want to send multiple HTTPS requests via the > same SSL connection, do I need to modify the address-map like below? > > AddrMap M2 = { > names = ['google.com:9191','google.com:9191','google.com:9191','google.com:9191','facebook.com:9191','facebook.com:9191','facebook.com:9191','youtube.com:9191'... No. Address maps and SSL encryption are not directly related to HTTP persistent connection reuse. If your focus is on getting persistent connections to work, you need to set pconn_use_lmt and idle_pconn_tout options on both robot and server side of the test. If possible, I recommend getting that to work without SSL first (just to keep things simpler) and then enabling SSL. Also, I would disable open_conn_lmt to start with and then enable it when everything is working. Finally, I would start with a single robot to make triage easier. HTH, Alex. > On Tue, Dec 18, 2012 at 6:20 PM, Dmitry Kurochkin wrote: >> Hi Jacky. >> >> unjc email writes: >> >>> Hello there, >>> >>> I need some help in configuring SSL session. The following is what I >>> have configured for the robot. I want to configure the client >>> workload to send three or four requests per SSL connection. With the >>> current setting, I found each HTTPS request has its own SSL connection >>> and it is closed upon receiving the requested object. Please advise >>> the correct setting to configure robots to make multiple requests in a >>> single SSL connection. >>> >> >> Robot config looks good. Did you set pconn_use_lmt for Server? >> >>> As you see I have set two domain lists for the clients, one set is for >>> HTTP requests and the other set for HTTPS requests. They are all >>> unique domains. Would there be a problem for robots to reuse SSL >>> connections for requesting different objects fromthe same site/domain? >>> >> >> No. 
>> >> Regards, >> Dmitry >> >>> Robot R = { >>> kind = "R101"; >>> pop_model = { >>> pop_distr = popUnif(); >>> }; >>> recurrence = 50%; >>> req_rate = undef(); >>> origins = [M1.names, M2.names: 10%]; >>> credentials = select(totalMemberSpace, totalRobots); >>> SslWrap wrap1 = { >>> ssl_config_file = "/tmp/ssl.conf"; >>> protocols = ["any"]; >>> ciphers = ["ALL:HIGH": 100%]; >>> rsa_key_sizes = [1024bit]; >>> session_resumption = 40%; >>> session_cache = 100; >>> }; >>> ssl_wraps = [wrap1]; >>> addresses = robotAddrs(authAddrScheme, theBench); >>> pconn_use_lmt = const(2147483647); >>> idle_pconn_tout = idleConnectionTimeout; >>> open_conn_lmt = maxConnPerRobot; >>> http_versions = ["1.0"]; >>> }; >>> >>> AddrMap M2 = { >>> names = ['affiliate.de:9090','buzzfeed.com:9090','usbank.com:9090'... >>> >>> AddrMap M2 = { >>> names = ['google.com:9191','facebook.com:9191','youtube.com:9191'... >>> >>> >>> >>> >>> >>> Thank you very much, >>> Jacky >>> _______________________________________________ >>> Users mailing list >>> Users at web-polygraph.org >>> http://www.web-polygraph.org/mailman/listinfo/users > _______________________________________________ > Users mailing list > Users at web-polygraph.org > http://www.web-polygraph.org/mailman/listinfo/users > From unjc.email at gmail.com Thu Feb 7 20:14:52 2013 From: unjc.email at gmail.com (unjc email) Date: Thu, 7 Feb 2013 15:14:52 -0500 Subject: Sending multiple requests in single SSL connection In-Reply-To: <5111AC97.7040407@measurement-factory.com> References: <87bodr7xrx.fsf@gmail.com> <5111AC97.7040407@measurement-factory.com> Message-ID: Hi Alex, I am following what you have advised and work on a normal HTTP load first. I am able to get the persistent connections working - when I examine the tcpdump, I see there are multiple HTTP requests being sent by a robot before the connection is closed. However, the HTTP requests found in a single TCP stream are addressed for different hosts (google.com, yahoo.com... ). What is the trick to make robots to send multiple requests for the same host (e.g. google.com) per persistent connection so that the persistent connections are domain specific? Thanks, Jacky On Tue, Feb 5, 2013 at 8:06 PM, Alex Rousskov wrote: > On 02/05/2013 03:55 PM, unjc email wrote: > >> As mentioned I have specified a list of domains for HTTPS requests, do >> WP robots send few requests against the same host before going to the >> next one? >> >> AddrMap M2 = { >> names = ['google.com:9191','facebook.com:9191','youtube.com:9191'... >> >> Request #1: https://google.com:9191/.../t05/_00000002.html >> Request #2: https://facebook.com:9191/.../t05/_00000003.html >> Request #3: https://youtube.com:9191/.../t05/_00000004.html >> >> If the robots sends HTTPS requests according to the sequence specified >> in address-map (like the example shown above), > > Robots select a random server from the Robot.origins array. If you > prefer to control the order of requests, you can tell a robot to replay > a trace file. > > >> then the SSL connection >> would be terminated before sending new request to the next host, is it >> correct? > > Robots close connections based on configured timeouts, robot connection > limits, HTTP message properties, next HTTP hop decisions, and various > errors. I believe that is true for both plain and encrypted connections. 
> > If a robot has to switch from server S1 to server S2 then the persistent > connection to S1 (if any) may be placed in the pool of idle persistent > connections, to be reused when the same robot decides to revisit S1 > again (unless it has been purged from the pool due to timeout, > connection limit, or disconnect). > > >> If I really want to send multiple HTTPS requests via the >> same SSL connection, do I need to modify the address-map like below? >> >> AddrMap M2 = { >> names = ['google.com:9191','google.com:9191','google.com:9191','google.com:9191','facebook.com:9191','facebook.com:9191','facebook.com:9191','youtube.com:9191'... > > > No. Address maps and SSL encryption are not directly related to HTTP > persistent connection reuse. If your focus is on getting persistent > connections to work, you need to set pconn_use_lmt and idle_pconn_tout > options on both robot and server side of the test. If possible, I > recommend getting that to work without SSL first (just to keep things > simpler) and then enabling SSL. > > Also, I would disable open_conn_lmt to start with and then enable it > when everything is working. > > Finally, I would start with a single robot to make triage easier. > > > HTH, > > Alex. > > > >> On Tue, Dec 18, 2012 at 6:20 PM, Dmitry Kurochkin wrote: >>> Hi Jacky. >>> >>> unjc email writes: >>> >>>> Hello there, >>>> >>>> I need some help in configuring SSL session. The following is what I >>>> have configured for the robot. I want to configure the client >>>> workload to send three or four requests per SSL connection. With the >>>> current setting, I found each HTTPS request has its own SSL connection >>>> and it is closed upon receiving the requested object. Please advise >>>> the correct setting to configure robots to make multiple requests in a >>>> single SSL connection. >>>> >>> >>> Robot config looks good. Did you set pconn_use_lmt for Server? >>> >>>> As you see I have set two domain lists for the clients, one set is for >>>> HTTP requests and the other set for HTTPS requests. They are all >>>> unique domains. Would there be a problem for robots to reuse SSL >>>> connections for requesting different objects fromthe same site/domain? >>>> >>> >>> No. >>> >>> Regards, >>> Dmitry >>> >>>> Robot R = { >>>> kind = "R101"; >>>> pop_model = { >>>> pop_distr = popUnif(); >>>> }; >>>> recurrence = 50%; >>>> req_rate = undef(); >>>> origins = [M1.names, M2.names: 10%]; >>>> credentials = select(totalMemberSpace, totalRobots); >>>> SslWrap wrap1 = { >>>> ssl_config_file = "/tmp/ssl.conf"; >>>> protocols = ["any"]; >>>> ciphers = ["ALL:HIGH": 100%]; >>>> rsa_key_sizes = [1024bit]; >>>> session_resumption = 40%; >>>> session_cache = 100; >>>> }; >>>> ssl_wraps = [wrap1]; >>>> addresses = robotAddrs(authAddrScheme, theBench); >>>> pconn_use_lmt = const(2147483647); >>>> idle_pconn_tout = idleConnectionTimeout; >>>> open_conn_lmt = maxConnPerRobot; >>>> http_versions = ["1.0"]; >>>> }; >>>> >>>> AddrMap M2 = { >>>> names = ['affiliate.de:9090','buzzfeed.com:9090','usbank.com:9090'... >>>> >>>> AddrMap M2 = { >>>> names = ['google.com:9191','facebook.com:9191','youtube.com:9191'... 
>>>> >>>> >>>> >>>> >>>> >>>> Thank you very much, >>>> Jacky >>>> _______________________________________________ >>>> Users mailing list >>>> Users at web-polygraph.org >>>> http://www.web-polygraph.org/mailman/listinfo/users >> _______________________________________________ >> Users mailing list >> Users at web-polygraph.org >> http://www.web-polygraph.org/mailman/listinfo/users >> > From rousskov at measurement-factory.com Thu Feb 7 21:45:26 2013 From: rousskov at measurement-factory.com (Alex Rousskov) Date: Thu, 07 Feb 2013 14:45:26 -0700 Subject: Sending multiple requests in single SSL connection In-Reply-To: References: <87bodr7xrx.fsf@gmail.com> <5111AC97.7040407@measurement-factory.com> Message-ID: <51142076.6000504@measurement-factory.com> On 02/07/2013 01:14 PM, unjc email wrote: > However, the HTTP > requests found in a single TCP stream are addressed for different > hosts (google.com, yahoo.com... ). What is the trick to make robots > to send multiple requests for the same host (e.g. google.com) per > persistent connection so that the persistent connections are domain > specific? In HTTP, persistent connections are maintained by clients on the "next HTTP hop" basis. If all robots are talking to a single forward proxy, then all connections will have the same next HTTP hop. Is that what is happening in your setup? If yes, there is no knob to change that robot behavior (and such a change would be unrealistic because real browsers behave similar to Polygraph robots in this area). However, you might be able to force robots to open more connections if you list many identical proxy addresses in Robot::http_proxies field. This hack is untested and the connections would still not be specific to origin servers. If you do not have a forward proxy configured, then robots should not use a connection to origin server A.com for sending requests to origin server B.com. If that happens, I encourage you to report it as a Polygraph bug on Launchpad. If you do that, please attach the workload file and pcap package capture to your bug report. Thank you, Alex. > On Tue, Feb 5, 2013 at 8:06 PM, Alex Rousskov > wrote: >> On 02/05/2013 03:55 PM, unjc email wrote: >> >>> As mentioned I have specified a list of domains for HTTPS requests, do >>> WP robots send few requests against the same host before going to the >>> next one? >>> >>> AddrMap M2 = { >>> names = ['google.com:9191','facebook.com:9191','youtube.com:9191'... >>> >>> Request #1: https://google.com:9191/.../t05/_00000002.html >>> Request #2: https://facebook.com:9191/.../t05/_00000003.html >>> Request #3: https://youtube.com:9191/.../t05/_00000004.html >>> >>> If the robots sends HTTPS requests according to the sequence specified >>> in address-map (like the example shown above), >> >> Robots select a random server from the Robot.origins array. If you >> prefer to control the order of requests, you can tell a robot to replay >> a trace file. >> >> >>> then the SSL connection >>> would be terminated before sending new request to the next host, is it >>> correct? >> >> Robots close connections based on configured timeouts, robot connection >> limits, HTTP message properties, next HTTP hop decisions, and various >> errors. I believe that is true for both plain and encrypted connections. 
>> >> If a robot has to switch from server S1 to server S2 then the persistent >> connection to S1 (if any) may be placed in the pool of idle persistent >> connections, to be reused when the same robot decides to revisit S1 >> again (unless it has been purged from the pool due to timeout, >> connection limit, or disconnect). >> >> >>> If I really want to send multiple HTTPS requests via the >>> same SSL connection, do I need to modify the address-map like below? >>> >>> AddrMap M2 = { >>> names = ['google.com:9191','google.com:9191','google.com:9191','google.com:9191','facebook.com:9191','facebook.com:9191','facebook.com:9191','youtube.com:9191'... >> >> >> No. Address maps and SSL encryption are not directly related to HTTP >> persistent connection reuse. If your focus is on getting persistent >> connections to work, you need to set pconn_use_lmt and idle_pconn_tout >> options on both robot and server side of the test. If possible, I >> recommend getting that to work without SSL first (just to keep things >> simpler) and then enabling SSL. >> >> Also, I would disable open_conn_lmt to start with and then enable it >> when everything is working. >> >> Finally, I would start with a single robot to make triage easier. >> >> >> HTH, >> >> Alex. >> >> >> >>> On Tue, Dec 18, 2012 at 6:20 PM, Dmitry Kurochkin wrote: >>>> Hi Jacky. >>>> >>>> unjc email writes: >>>> >>>>> Hello there, >>>>> >>>>> I need some help in configuring SSL session. The following is what I >>>>> have configured for the robot. I want to configure the client >>>>> workload to send three or four requests per SSL connection. With the >>>>> current setting, I found each HTTPS request has its own SSL connection >>>>> and it is closed upon receiving the requested object. Please advise >>>>> the correct setting to configure robots to make multiple requests in a >>>>> single SSL connection. >>>>> >>>> >>>> Robot config looks good. Did you set pconn_use_lmt for Server? >>>> >>>>> As you see I have set two domain lists for the clients, one set is for >>>>> HTTP requests and the other set for HTTPS requests. They are all >>>>> unique domains. Would there be a problem for robots to reuse SSL >>>>> connections for requesting different objects fromthe same site/domain? >>>>> >>>> >>>> No. >>>> >>>> Regards, >>>> Dmitry >>>> >>>>> Robot R = { >>>>> kind = "R101"; >>>>> pop_model = { >>>>> pop_distr = popUnif(); >>>>> }; >>>>> recurrence = 50%; >>>>> req_rate = undef(); >>>>> origins = [M1.names, M2.names: 10%]; >>>>> credentials = select(totalMemberSpace, totalRobots); >>>>> SslWrap wrap1 = { >>>>> ssl_config_file = "/tmp/ssl.conf"; >>>>> protocols = ["any"]; >>>>> ciphers = ["ALL:HIGH": 100%]; >>>>> rsa_key_sizes = [1024bit]; >>>>> session_resumption = 40%; >>>>> session_cache = 100; >>>>> }; >>>>> ssl_wraps = [wrap1]; >>>>> addresses = robotAddrs(authAddrScheme, theBench); >>>>> pconn_use_lmt = const(2147483647); >>>>> idle_pconn_tout = idleConnectionTimeout; >>>>> open_conn_lmt = maxConnPerRobot; >>>>> http_versions = ["1.0"]; >>>>> }; >>>>> >>>>> AddrMap M2 = { >>>>> names = ['affiliate.de:9090','buzzfeed.com:9090','usbank.com:9090'... >>>>> >>>>> AddrMap M2 = { >>>>> names = ['google.com:9191','facebook.com:9191','youtube.com:9191'... 
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Thank you very much,
>>>>> Jacky
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users at web-polygraph.org
>>>>> http://www.web-polygraph.org/mailman/listinfo/users
>>> _______________________________________________
>>> Users mailing list
>>> Users at web-polygraph.org
>>> http://www.web-polygraph.org/mailman/listinfo/users
>>>
>>

From rousskov at measurement-factory.com Thu Feb 7 22:51:31 2013
From: rousskov at measurement-factory.com (Alex Rousskov)
Date: Thu, 07 Feb 2013 15:51:31 -0700
Subject: Sending multiple requests in single SSL connection
In-Reply-To: 
References: <87bodr7xrx.fsf@gmail.com> <5111AC97.7040407@measurement-factory.com> <51142076.6000504@measurement-factory.com>
Message-ID: <51142FF3.1040302@measurement-factory.com>

On 02/07/2013 03:23 PM, unjc email wrote:
> I believe my testing environment falls in the first case
> (WP Clients > F5 (single entry point) > squid (> poisoned DNS) > WP
> Servers). I agree with you that the way the Robots handle the HTTP
> traffic now is fine. However, would the same behavior apply to
> HTTPS traffic?

When sending HTTPS requests through a forward proxy, the next HTTP hop
is the origin server and not the proxy, so you should see [encrypted]
server-specific persistent connections (tunneled at the TCP level
through the proxy). In that case, from the robot's point of view, the
proxy is the next TCP hop, but not the next HTTP hop.

> Does Robot's "open_conn_lmt" value need to be equal to
> the size of the domain list in order to support domain-specific persistent
> connections? Say, I have 500 robots and 5000 domains in my
> address-map, it doesn't seem to make sense for each robot to keep an
> SSL connection open for each domain. With 1 (or a finite value) in
> Robot's "open_conn_lmt", what is the proper way to set up the workload
> to avoid closing the SSL connection after sending only one HTTPS
> request? Please advise.

Open connection limit and idle connection pools are robot properties.
Individual robots, just like individual browsers or individual proxies,
do not share those limits and pools. If you want two robots to share
connections, you probably want one robot (that is twice as fast) instead.

The number of open connections is the sum of idle persistent connections
and concurrent connections. Thus, it is not possible to predict that
number based on the number of origin servers alone. Factors such as
request rate and response time will affect it. Since a Polygraph robot
may open several concurrent connections to the same host, your open
connection limit may have to be larger than the number of active
concurrent connections plus the number of origin servers, but the exact
number is difficult to predict because idle connections are managed
using a LIFO queue (IIRC) and have timeouts, both of which are difficult
to correlate with your "after sending one request" criterion.

I do not know what your end goal is, but perhaps you can approach it
from the opposite end? Remove the open connections limit, monitor
traffic to measure the number of concurrent connections across all
robots (active and idle; see the last column of console i-phase lines),
and only then set the robot limit to the maximum your system can
comfortably support. If you see too many open connections but cannot
limit them, perhaps you need fewer robots (or more Polygraph processes
or more bench drones)?

HTH,

Alex.
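As an illustration of the single-robot triage setup suggested earlier in
this thread, a minimal workload sketch might look like the following.
The addresses, the 10-request connection limit, and the 15-second idle
timeout are illustrative placeholders rather than values from Jacky's
configuration, and cntOther stands for whatever Content object the
workload already defines (as in the servers quoted above):

    // minimal persistent-connection triage sketch: one robot, one server,
    // no open_conn_lmt and no SSL while debugging
    Server S = {
        kind = "S101";
        contents = [ cntOther ];                 // any existing content definition
        addresses = [ '192.168.102.206:9090' ];  // placeholder test address
        pconn_use_lmt = const(10);               // up to 10 requests per connection
        idle_pconn_tout = 15sec;
    };

    Robot R = {
        kind = "R101";
        origins = S.addresses;
        addresses = [ '172.1.2.10' ];            // placeholder client address
        pconn_use_lmt = const(10);
        idle_pconn_tout = 15sec;
        http_versions = [ "1.0" ];
    };

    use(S, R);

Once a packet capture shows several requests per connection with a setup
like this, the SslWrap, open_conn_lmt, and multi-robot pieces can be
layered back on.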
> On Thu, Feb 7, 2013 at 4:45 PM, Alex Rousskov wrote: >> On 02/07/2013 01:14 PM, unjc email wrote: >> >>> However, the HTTP >>> requests found in a single TCP stream are addressed for different >>> hosts (google.com, yahoo.com... ). What is the trick to make robots >>> to send multiple requests for the same host (e.g. google.com) per >>> persistent connection so that the persistent connections are domain >>> specific? >> >> In HTTP, persistent connections are maintained by clients on the "next >> HTTP hop" basis. If all robots are talking to a single forward proxy, >> then all connections will have the same next HTTP hop. Is that what is >> happening in your setup? >> >> If yes, there is no knob to change that robot behavior (and such a >> change would be unrealistic because real browsers behave similar to >> Polygraph robots in this area). However, you might be able to force >> robots to open more connections if you list many identical proxy >> addresses in Robot::http_proxies field. This hack is untested and the >> connections would still not be specific to origin servers. >> >> If you do not have a forward proxy configured, then robots should not >> use a connection to origin server A.com for sending requests to origin >> server B.com. If that happens, I encourage you to report it as a >> Polygraph bug on Launchpad. If you do that, please attach the workload >> file and pcap package capture to your bug report. >> >> >> Thank you, >> >> Alex. >> >> >> >>> On Tue, Feb 5, 2013 at 8:06 PM, Alex Rousskov >>> wrote: >>>> On 02/05/2013 03:55 PM, unjc email wrote: >>>> >>>>> As mentioned I have specified a list of domains for HTTPS requests, do >>>>> WP robots send few requests against the same host before going to the >>>>> next one? >>>>> >>>>> AddrMap M2 = { >>>>> names = ['google.com:9191','facebook.com:9191','youtube.com:9191'... >>>>> >>>>> Request #1: https://google.com:9191/.../t05/_00000002.html >>>>> Request #2: https://facebook.com:9191/.../t05/_00000003.html >>>>> Request #3: https://youtube.com:9191/.../t05/_00000004.html >>>>> >>>>> If the robots sends HTTPS requests according to the sequence specified >>>>> in address-map (like the example shown above), >>>> >>>> Robots select a random server from the Robot.origins array. If you >>>> prefer to control the order of requests, you can tell a robot to replay >>>> a trace file. >>>> >>>> >>>>> then the SSL connection >>>>> would be terminated before sending new request to the next host, is it >>>>> correct? >>>> >>>> Robots close connections based on configured timeouts, robot connection >>>> limits, HTTP message properties, next HTTP hop decisions, and various >>>> errors. I believe that is true for both plain and encrypted connections. >>>> >>>> If a robot has to switch from server S1 to server S2 then the persistent >>>> connection to S1 (if any) may be placed in the pool of idle persistent >>>> connections, to be reused when the same robot decides to revisit S1 >>>> again (unless it has been purged from the pool due to timeout, >>>> connection limit, or disconnect). >>>> >>>> >>>>> If I really want to send multiple HTTPS requests via the >>>>> same SSL connection, do I need to modify the address-map like below? >>>>> >>>>> AddrMap M2 = { >>>>> names = ['google.com:9191','google.com:9191','google.com:9191','google.com:9191','facebook.com:9191','facebook.com:9191','facebook.com:9191','youtube.com:9191'... >>>> >>>> >>>> No. Address maps and SSL encryption are not directly related to HTTP >>>> persistent connection reuse. 
If your focus is on getting persistent >>>> connections to work, you need to set pconn_use_lmt and idle_pconn_tout >>>> options on both robot and server side of the test. If possible, I >>>> recommend getting that to work without SSL first (just to keep things >>>> simpler) and then enabling SSL. >>>> >>>> Also, I would disable open_conn_lmt to start with and then enable it >>>> when everything is working. >>>> >>>> Finally, I would start with a single robot to make triage easier. >>>> >>>> >>>> HTH, >>>> >>>> Alex. >>>> >>>> >>>> >>>>> On Tue, Dec 18, 2012 at 6:20 PM, Dmitry Kurochkin wrote: >>>>>> Hi Jacky. >>>>>> >>>>>> unjc email writes: >>>>>> >>>>>>> Hello there, >>>>>>> >>>>>>> I need some help in configuring SSL session. The following is what I >>>>>>> have configured for the robot. I want to configure the client >>>>>>> workload to send three or four requests per SSL connection. With the >>>>>>> current setting, I found each HTTPS request has its own SSL connection >>>>>>> and it is closed upon receiving the requested object. Please advise >>>>>>> the correct setting to configure robots to make multiple requests in a >>>>>>> single SSL connection. >>>>>>> >>>>>> >>>>>> Robot config looks good. Did you set pconn_use_lmt for Server? >>>>>> >>>>>>> As you see I have set two domain lists for the clients, one set is for >>>>>>> HTTP requests and the other set for HTTPS requests. They are all >>>>>>> unique domains. Would there be a problem for robots to reuse SSL >>>>>>> connections for requesting different objects fromthe same site/domain? >>>>>>> >>>>>> >>>>>> No. >>>>>> >>>>>> Regards, >>>>>> Dmitry >>>>>> >>>>>>> Robot R = { >>>>>>> kind = "R101"; >>>>>>> pop_model = { >>>>>>> pop_distr = popUnif(); >>>>>>> }; >>>>>>> recurrence = 50%; >>>>>>> req_rate = undef(); >>>>>>> origins = [M1.names, M2.names: 10%]; >>>>>>> credentials = select(totalMemberSpace, totalRobots); >>>>>>> SslWrap wrap1 = { >>>>>>> ssl_config_file = "/tmp/ssl.conf"; >>>>>>> protocols = ["any"]; >>>>>>> ciphers = ["ALL:HIGH": 100%]; >>>>>>> rsa_key_sizes = [1024bit]; >>>>>>> session_resumption = 40%; >>>>>>> session_cache = 100; >>>>>>> }; >>>>>>> ssl_wraps = [wrap1]; >>>>>>> addresses = robotAddrs(authAddrScheme, theBench); >>>>>>> pconn_use_lmt = const(2147483647); >>>>>>> idle_pconn_tout = idleConnectionTimeout; >>>>>>> open_conn_lmt = maxConnPerRobot; >>>>>>> http_versions = ["1.0"]; >>>>>>> }; >>>>>>> >>>>>>> AddrMap M2 = { >>>>>>> names = ['affiliate.de:9090','buzzfeed.com:9090','usbank.com:9090'... >>>>>>> >>>>>>> AddrMap M2 = { >>>>>>> names = ['google.com:9191','facebook.com:9191','youtube.com:9191'... >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> Thank you very much, >>>>>>> Jacky >>>>>>> _______________________________________________ >>>>>>> Users mailing list >>>>>>> Users at web-polygraph.org >>>>>>> http://www.web-polygraph.org/mailman/listinfo/users >>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users at web-polygraph.org >>>>> http://www.web-polygraph.org/mailman/listinfo/users >>>>> >>>> >>
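For completeness, the untested Robot::http_proxies hack mentioned above
would be written along the following lines in PGL; the proxy address,
port, and number of repetitions are placeholders, and the effect on
connection counts is not guaranteed:

    Robot R = {
        ... other robot settings ...
        // untested hack: listing the same next-hop proxy several times
        // may encourage a robot to open more connections to it
        http_proxies = [ '10.0.0.1:3128', '10.0.0.1:3128', '10.0.0.1:3128' ];
    };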