From unjc.email at gmail.com  Mon Dec 3 23:29:24 2012
From: unjc.email at gmail.com (unjc email)
Date: Mon, 3 Dec 2012 18:29:24 -0500
Subject: Separated stats for http and https traffic
In-Reply-To: <8738zrmu4v.fsf@gmail.com>
References: <8738zrmu4v.fsf@gmail.com>
Message-ID: 

If I want to modify the code to print the non-SSL stats to the console
log as well, please kindly advise which files I should look into.

Current Console Log:
008.30| i-Ramping  26171  505.19   97  0.00  2123  9
           ssl      9563   80.60   97  0.00     0  7
008.38| i-Ramping  26572  472.80  113  0.00  1963  9
           ssl      9763   80.20  113  0.00     0  8

Ideal Output:
008.30| i-Ramping  26171  505.19   97  0.00  2123  9
           http     9563   80.60   97  0.00     0  7
           ssl      9563   80.60   97  0.00     0  7
008.38| i-Ramping  26572  472.80  113  0.00  1963  9
           http     9563   80.60   97  0.00     0  7
           ssl      9763   80.20  113  0.00     0  8

I have added a new stat object "theHttpStat" to StatIntvl.cc, but I
still do not see the output I want. Please give me some hints on how I
could make StatCycle.cc print the "http" stats as well.

Thanks,
Jacky

On Fri, Nov 30, 2012 at 2:23 AM, Dmitry Kurochkin
wrote:
> Hi Jacky.
>
> unjc email writes:
>
>> Hi there,
>>
>> Is there any way to show individual statistics for http and https
>> traffic in the console log?
>
> No. Console output is not configurable.
>
>> The default log shows https (ssl) stats and
>> combined stats only. I need to capture stats of regular HTTP traffic
>> for performance comparison as well. Please help.
>>
>
> You should use binary logs for that. We do not record plain HTTP
> stats, but you can calculate them from other stats. Keep in mind that
> CONNECT requests contribute to SSL stats. So the formula for plain
> HTTP stats in polygraph-lx output would be something like (basic -
> (ssl.rep - connect)). It may be more complex depending on your
> workload.
>
> Regards,
> Dmitry
>
>>
>> Thanks,
>> Jacky
>> _______________________________________________
>> Users mailing list
>> Users at web-polygraph.org
>> http://www.web-polygraph.org/mailman/listinfo/users

From dmitry.kurochkin at measurement-factory.com  Wed Dec 5 05:13:37 2012
From: dmitry.kurochkin at measurement-factory.com (Dmitry Kurochkin)
Date: Wed, 05 Dec 2012 09:13:37 +0400
Subject: Separated stats for http and https traffic
In-Reply-To: 
References: <8738zrmu4v.fsf@gmail.com>
Message-ID: <87a9ttjd32.fsf@gmail.com>

Hi Jacky.

unjc email writes:

> Dmitry, thanks again for your quick reply. I tried polygraph-lx
> and polygraph-ltrace, and found some useful information there.
> However, I am still in doubt about how I could extract the
> throughputs and response times of HTTP and HTTPS traffic throughout
> the ramp test. Would you please give me some hints on how I could
> record/extract them during the test?

You can extract interval stats using polygraph-ltrace, e.g.:

  $ ltrace --objects basic.rptm.count,basic.rptm.mean,ssl.rep.rptm.mean LOG

As I said before, there are no explicit pure HTTP stats. You will have
to calculate them from other stats.

> I also find the amount of data
> extracted (like ssl.rep.rptm) is much less than the console log; is
> there an option to change the recording interval so that equivalent
> data is logged in the binary logs too?
>

Stats are recorded into the binary logs at the same interval as they
are printed on the console. The stats recording interval is controlled
by the --stats_cycle option and is 5sec by default. Polygraph-ltrace
aggregates interval stats with a 60sec window by default. You can
change the window length using the --win_len option.

  $ ltrace --objects time,basic.rptm.mean,ssl.rep.rptm.mean --win_len 1sec LOG

This should give you output similar to the console. You may specify
the --time_unit 1sec option for relative time. Also, you may want to
output the interval object for some additional info.
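The subtraction described above, (basic - (ssl.rep - connect)), can be sketched in a few lines. A minimal Python illustration of the arithmetic only; the function name `plain_http_count` and the sample figures are hypothetical, not part of any Polygraph tool:

```python
def plain_http_count(basic: int, ssl_rep: int, connect: int) -> int:
    """Estimate pure-HTTP replies as basic - (ssl.rep - connect).

    CONNECT requests are counted in the SSL reply stats, so they are
    added back before subtracting SSL replies from the combined total.
    """
    return basic - (ssl_rep - connect)

# Hypothetical interval: 26171 combined replies, 9563 SSL replies,
# 2123 CONNECT requests -> 18731 pure-HTTP replies.
print(plain_http_count(26171, 9563, 2123))  # prints 18731
```

The same subtraction applies per interval when post-processing polygraph-ltrace output, with the caveat from the thread that the exact formula may be more complex depending on the workload.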
Regards,
Dmitry

>
>
> Thanks,
> Jacky
>
> On Fri, Nov 30, 2012 at 2:23 AM, Dmitry Kurochkin
> wrote:
>> Hi Jacky.
>>
>> unjc email writes:
>>
>>> Hi there,
>>>
>>> Is there any way to show individual statistics for http and https
>>> traffic in the console log?
>>
>> No. Console output is not configurable.
>>
>>> The default log shows https (ssl) stats and
>>> combined stats only. I need to capture stats of regular HTTP traffic
>>> for performance comparison as well. Please help.
>>>
>>
>> You should use binary logs for that. We do not record plain HTTP
>> stats, but you can calculate them from other stats. Keep in mind that
>> CONNECT requests contribute to SSL stats. So the formula for plain
>> HTTP stats in polygraph-lx output would be something like (basic -
>> (ssl.rep - connect)). It may be more complex depending on your
>> workload.
>>
>> Regards,
>> Dmitry
>>
>>>
>>> Thanks,
>>> Jacky
>>> _______________________________________________
>>> Users mailing list
>>> Users at web-polygraph.org
>>> http://www.web-polygraph.org/mailman/listinfo/users

From dmitry.kurochkin at measurement-factory.com  Wed Dec 5 05:35:08 2012
From: dmitry.kurochkin at measurement-factory.com (Dmitry Kurochkin)
Date: Wed, 05 Dec 2012 09:35:08 +0400
Subject: Separated stats for http and https traffic
In-Reply-To: 
References: <8738zrmu4v.fsf@gmail.com>
Message-ID: <877goxjc37.fsf@gmail.com>

unjc email writes:

> If I want to modify the code to print the non-SSL stats to the console
> log as well, please kindly advise which files I should look into.
>
> Current Console Log:
> 008.30| i-Ramping  26171  505.19   97  0.00  2123  9
>            ssl      9563   80.60   97  0.00     0  7
> 008.38| i-Ramping  26572  472.80  113  0.00  1963  9
>            ssl      9763   80.20  113  0.00     0  8
>
> Ideal Output:
> 008.30| i-Ramping  26171  505.19   97  0.00  2123  9
>            http     9563   80.60   97  0.00     0  7
>            ssl      9563   80.60   97  0.00     0  7
> 008.38| i-Ramping  26572  472.80  113  0.00  1963  9
>            http     9563   80.60   97  0.00     0  7
>            ssl      9763   80.20  113  0.00     0  8
>
>
> I have added a new stat object "theHttpStat" to StatIntvl.cc, but I
> still do not see the output I want. Please give me some hints on how I
> could make StatCycle.cc print the "http" stats as well.
>

I suggest you look at how FTP protocol interval stats
(StatIntvlRec::theFtpStat) are handled. That should give you a good
idea of what changes you would need to make.

I still think you can achieve what you need without changing the code.
IMO it would be easier for you to calculate pure-HTTP stats from
existing stats than to add them to Polygraph.

Regards,
Dmitry

>
> Thanks,
> Jacky
>
>
> On Fri, Nov 30, 2012 at 2:23 AM, Dmitry Kurochkin
> wrote:
>> Hi Jacky.
>>
>> unjc email writes:
>>
>>> Hi there,
>>>
>>> Is there any way to show individual statistics for http and https
>>> traffic in the console log?
>>
>> No. Console output is not configurable.
>>
>>> The default log shows https (ssl) stats and
>>> combined stats only. I need to capture stats of regular HTTP traffic
>>> for performance comparison as well. Please help.
>>>
>>
>> You should use binary logs for that. We do not record plain HTTP
>> stats, but you can calculate them from other stats. Keep in mind that
>> CONNECT requests contribute to SSL stats. So the formula for plain
>> HTTP stats in polygraph-lx output would be something like (basic -
>> (ssl.rep - connect)). It may be more complex depending on your
>> workload.
>>
>> Regards,
>> Dmitry
>>
>>>
>>> Thanks,
>>> Jacky
>>> _______________________________________________
>>> Users mailing list
>>> Users at web-polygraph.org
>>> http://www.web-polygraph.org/mailman/listinfo/users

From unjc.email at gmail.com  Tue Dec 18 23:10:28 2012
From: unjc.email at gmail.com (unjc email)
Date: Tue, 18 Dec 2012 18:10:28 -0500
Subject: Sending multiple requests in single SSL connection
Message-ID: 

Hello there,

I need some help in configuring SSL sessions. The following is what I
have configured for the robot. I want to configure the client workload
to send three or four requests per SSL connection. With the current
setting, I found that each HTTPS request has its own SSL connection,
which is closed upon receiving the requested object. Please advise the
correct setting to configure robots to make multiple requests in a
single SSL connection.

As you can see, I have set two domain lists for the clients: one set is
for HTTP requests and the other set for HTTPS requests. They are all
unique domains. Would there be a problem for robots to reuse SSL
connections for requesting different objects from the same site/domain?

Robot R = {
    kind = "R101";
    pop_model = {
        pop_distr = popUnif();
    };
    recurrence = 50%;
    req_rate = undef();
    origins = [M1.names, M2.names: 10%];
    credentials = select(totalMemberSpace, totalRobots);
    SslWrap wrap1 = {
        ssl_config_file = "/tmp/ssl.conf";
        protocols = ["any"];
        ciphers = ["ALL:HIGH": 100%];
        rsa_key_sizes = [1024bit];
        session_resumption = 40%;
        session_cache = 100;
    };
    ssl_wraps = [wrap1];
    addresses = robotAddrs(authAddrScheme, theBench);
    pconn_use_lmt = const(2147483647);
    idle_pconn_tout = idleConnectionTimeout;
    open_conn_lmt = maxConnPerRobot;
    http_versions = ["1.0"];
};

AddrMap M1 = {
    names = ['affiliate.de:9090','buzzfeed.com:9090','usbank.com:9090'...

AddrMap M2 = {
    names = ['google.com:9191','facebook.com:9191','youtube.com:9191'...
Thank you very much,
Jacky

From dmitry.kurochkin at measurement-factory.com  Tue Dec 18 23:20:34 2012
From: dmitry.kurochkin at measurement-factory.com (Dmitry Kurochkin)
Date: Wed, 19 Dec 2012 03:20:34 +0400
Subject: Sending multiple requests in single SSL connection
In-Reply-To: 
References: 
Message-ID: <87bodr7xrx.fsf@gmail.com>

Hi Jacky.

unjc email writes:

> Hello there,
>
> I need some help in configuring SSL sessions. The following is what I
> have configured for the robot. I want to configure the client
> workload to send three or four requests per SSL connection. With the
> current setting, I found that each HTTPS request has its own SSL
> connection, which is closed upon receiving the requested object.
> Please advise the correct setting to configure robots to make
> multiple requests in a single SSL connection.
>

Robot config looks good. Did you set pconn_use_lmt for Server?

> As you can see, I have set two domain lists for the clients: one set
> is for HTTP requests and the other set for HTTPS requests. They are
> all unique domains. Would there be a problem for robots to reuse SSL
> connections for requesting different objects from the same
> site/domain?
>

No.

Regards,
Dmitry

> Robot R = {
>     kind = "R101";
>     pop_model = {
>         pop_distr = popUnif();
>     };
>     recurrence = 50%;
>     req_rate = undef();
>     origins = [M1.names, M2.names: 10%];
>     credentials = select(totalMemberSpace, totalRobots);
>     SslWrap wrap1 = {
>         ssl_config_file = "/tmp/ssl.conf";
>         protocols = ["any"];
>         ciphers = ["ALL:HIGH": 100%];
>         rsa_key_sizes = [1024bit];
>         session_resumption = 40%;
>         session_cache = 100;
>     };
>     ssl_wraps = [wrap1];
>     addresses = robotAddrs(authAddrScheme, theBench);
>     pconn_use_lmt = const(2147483647);
>     idle_pconn_tout = idleConnectionTimeout;
>     open_conn_lmt = maxConnPerRobot;
>     http_versions = ["1.0"];
> };
>
> AddrMap M1 = {
>     names = ['affiliate.de:9090','buzzfeed.com:9090','usbank.com:9090'...
>
> AddrMap M2 = {
>     names = ['google.com:9191','facebook.com:9191','youtube.com:9191'...
>
>
>
> Thank you very much,
> Jacky
> _______________________________________________
> Users mailing list
> Users at web-polygraph.org
> http://www.web-polygraph.org/mailman/listinfo/users

From unjc.email at gmail.com  Fri Dec 21 20:38:24 2012
From: unjc.email at gmail.com (unjc email)
Date: Fri, 21 Dec 2012 15:38:24 -0500
Subject: Separate Robots for HTTP and HTTPS traffic
Message-ID: 

Hello there,

I have already set up a workload that generates mixed http/https
traffic. Since there is an issue with the https proxy, the http
traffic is heavily affected because the same robots are responsible
for both types of traffic. Would any of you please advise how I could
configure two kinds of robots (one for http and one for https) bound
to two different loop-back IP pools?

I understand it will probably not be possible to distribute the load
through Robot's origins (origins = [M1.names, M2.names: 10%];)
anymore. I assume I could try to dedicate a different number of robots
to each robot type.
Thank you in advance for your help,
Jacky

Bench theBench = {
    peak_req_rate = 1000/sec;
    client_side = {
        hosts = ['192.168.128.36','192.168.128.37'];
        addr_space = ['lo::172.1.2-250.20-30'];
        max_host_load = theBench.peak_req_rate/count(client_side.hosts);
        max_agent_load = theBench.peak_req_rate/totalRobots;
    };
    server_side = {
        hosts = ['192.168.102.206','192.168.102.207'];
        max_host_load = theBench.peak_req_rate;
        max_agent_load = theBench.peak_req_rate;
    };
};

Server S1 = {
    kind = "S101";
    contents = [JpgContent: 73.73%, HtmlContent: 11.45%, SwfContent:
        13.05%, FlvContent: 0.06%, Mp3Content: 0.01%, cntOther];
    direct_access = contents;
    addresses = M1.addresses;
    http_versions = ["1.0"];
};

Server S2 = {
    kind = "S101";
    contents = [JpgContent: 73.73%, HtmlContent: 11.45%, SwfContent:
        13.05%, FlvContent: 0.06%, Mp3Content: 0.01%, cntOther];
    direct_access = contents;
    SslWrap wrap1 = {
        ssl_config_file = "/tmp/ssl.conf";
        protocols = ["any"];
        ciphers = ["ALL:HIGH": 100%];
        rsa_key_sizes = [1024bit];
        session_resumption = 40%;
        session_cache = 100;
    };
    ssl_wraps = [wrap1];
    addresses = M2.addresses;
    http_versions = ["1.0"];
};

Robot R = {
    kind = "R101";
    pop_model = {
        pop_distr = popUnif();
    };
    recurrence = 50%;
    req_rate = undef();
    origins = [M1.names, M2.names: 10%];
    credentials = select(totalMemberSpace, totalRobots);
    SslWrap wrap1 = {
        ssl_config_file = "/tmp/ssl.conf";
        protocols = ["any"];
        ciphers = ["ALL:HIGH": 100%];
        rsa_key_sizes = [1024bit];
        session_resumption = 40%;
        session_cache = 100;
    };
    ssl_wraps = [wrap1];
    addresses = robotAddrs(authAddrScheme, theBench);
    pconn_use_lmt = const(2147483647);
    idle_pconn_tout = idleConnectionTimeout;
    open_conn_lmt = maxConnPerRobot;
    http_versions = ["1.0"];
};

From rousskov at measurement-factory.com  Sat Dec 22 00:45:48 2012
From: rousskov at measurement-factory.com (Alex Rousskov)
Date: Fri, 21 Dec 2012 17:45:48 -0700
Subject: Separate Robots for HTTP and HTTPS traffic
In-Reply-To: 
References: 
Message-ID:
<50D502BC.2040809@measurement-factory.com>

On 12/21/2012 01:38 PM, unjc email wrote:
> I have already set up a workload that generates mixed http/https
> traffic. Since there is an issue with the https proxy, the http
> traffic is heavily affected because the same robots are responsible
> for both types of traffic.

Just FYI: This is, in part, a side effect of your best-effort workload.
In constant-pressure workloads (Robot.req_rate is defined), individual
Robot transactions may share open connection limits but not much else,
and so SSL proxy problems do not decrease the HTTP traffic rate.
Please be extra careful with best-effort workloads as they often
produce misleading results.

> Would any of you please advise how I could
> configure two kinds of robots (one for http and one for https) binding
> to two different loop-back IP pools?

Just define and use two Robot objects. You already have two Server
objects. You can do the same with Robots. If you want to reduce PGL
code duplication, you can use this trick:

Robot rCommon = {
    ... settings common to both robots ...
};

Robot rSecure = rCommon;
rSecure = {
    ... settings specific to the SSL robot ...
};

Robot rPlain = rCommon;
rPlain = {
    ... settings specific to the HTTP robot ...
};

// use both robots after finalizing all their details
use(rSecure, rPlain);

> I understand it will probably not be possible to distribute the load
> through using Robot's origins (origins = [M1.names, M2.names: 10%];)
> anymore. I assume I could try to dedicate different number of robots
> for each robot type.

Yes, but you can apply a similar trick to robot addresses instead of
origin addresses:

// compute addresses for all robots
theBench.client_side.addresses = robotAddrs(authAddrScheme, theBench);

// randomly split computed addresses across two robot categories
[ rSecure.addresses: 10%, rPlain.addresses ] = theBench.client_side.addresses;

HTH,

Alex.
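Assembled into one piece with the robot settings from Jacky's workload, the two tricks above might look like the following PGL sketch. This is an untested outline: the assignment of M1.names to the plain robot and M2.names to the SSL robot, and the assumption that the SslWrap wrap1 is defined at the top level, are editorial guesses rather than configuration from the thread.

```
// Untested sketch: two robot kinds sharing common settings.
// Assumes a top-level SslWrap wrap1, plain-HTTP origins in M1,
// and HTTPS origins in M2.
Robot rCommon = {
    kind = "R101";
    pop_model = { pop_distr = popUnif(); };
    recurrence = 50%;
    req_rate = undef();
    credentials = select(totalMemberSpace, totalRobots);
    pconn_use_lmt = const(2147483647);
    idle_pconn_tout = idleConnectionTimeout;
    open_conn_lmt = maxConnPerRobot;
    http_versions = ["1.0"];
};

Robot rPlain = rCommon;
rPlain = {
    origins = M1.names;   // plain-HTTP traffic only
};

Robot rSecure = rCommon;
rSecure = {
    origins = M2.names;   // HTTPS traffic only
    ssl_wraps = [wrap1];
};

// compute addresses for all robots, then randomly split them
// between the two robot categories (10% HTTPS, the rest plain HTTP)
theBench.client_side.addresses = robotAddrs(authAddrScheme, theBench);
[ rSecure.addresses: 10%, rPlain.addresses ] = theBench.client_side.addresses;

use(rSecure, rPlain);
```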
> Bench theBench = {
>     peak_req_rate = 1000/sec;
>     client_side = {
>         hosts = ['192.168.128.36','192.168.128.37'];
>         addr_space = ['lo::172.1.2-250.20-30'];
>         max_host_load = theBench.peak_req_rate/count(client_side.hosts);
>         max_agent_load = theBench.peak_req_rate/totalRobots;
>     };
>     server_side = {
>         hosts = ['192.168.102.206','192.168.102.207'];
>         max_host_load = theBench.peak_req_rate;
>         max_agent_load = theBench.peak_req_rate;
>     };
> };
>
> Server S1 = {
>     kind = "S101";
>     contents = [JpgContent: 73.73%, HtmlContent: 11.45%, SwfContent:
>         13.05%, FlvContent: 0.06%, Mp3Content: 0.01%, cntOther];
>     direct_access = contents;
>     addresses = M1.addresses;
>     http_versions = ["1.0"];
> };
>
> Server S2 = {
>     kind = "S101";
>     contents = [JpgContent: 73.73%, HtmlContent: 11.45%, SwfContent:
>         13.05%, FlvContent: 0.06%, Mp3Content: 0.01%, cntOther];
>     direct_access = contents;
>     SslWrap wrap1 = {
>         ssl_config_file = "/tmp/ssl.conf";
>         protocols = ["any"];
>         ciphers = ["ALL:HIGH": 100%];
>         rsa_key_sizes = [1024bit];
>         session_resumption = 40%;
>         session_cache = 100;
>     };
>     ssl_wraps = [wrap1];
>     addresses = M2.addresses;
>     http_versions = ["1.0"];
> };
>
> Robot R = {
>     kind = "R101";
>     pop_model = {
>         pop_distr = popUnif();
>     };
>     recurrence = 50%;
>     req_rate = undef();
>     origins = [M1.names, M2.names: 10%];
>     credentials = select(totalMemberSpace, totalRobots);
>     SslWrap wrap1 = {
>         ssl_config_file = "/tmp/ssl.conf";
>         protocols = ["any"];
>         ciphers = ["ALL:HIGH": 100%];
>         rsa_key_sizes = [1024bit];
>         session_resumption = 40%;
>         session_cache = 100;
>     };
>     ssl_wraps = [wrap1];
>     addresses = robotAddrs(authAddrScheme, theBench);
>     pconn_use_lmt = const(2147483647);
>     idle_pconn_tout = idleConnectionTimeout;
>     open_conn_lmt = maxConnPerRobot;
>     http_versions = ["1.0"];
> };
> _______________________________________________
> Users mailing list
> Users at web-polygraph.org
> http://www.web-polygraph.org/mailman/listinfo/users
>
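On Alex's best-effort caveat earlier in the thread: switching the workload above to constant pressure amounts to defining Robot.req_rate instead of leaving it undef(). A hedged PGL fragment; the rate expression reuses the bench arithmetic already present in the workload, but it is an illustration, not a tested setting:

```
// Constant-pressure variant of the Robot above: with req_rate
// defined, an SSL proxy slowdown cannot throttle plain-HTTP traffic.
// peak_req_rate and totalRobots come from the surrounding workload.
Robot R = {
    kind = "R101";
    // ... other settings as in the workload above ...
    req_rate = theBench.peak_req_rate / totalRobots;
};
```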