From Nagaraja_Gundurao at symantec.com  Fri Aug  4 18:57:07 2017
From: Nagaraja_Gundurao at symantec.com (Nagaraja Gundurao)
Date: Fri, 4 Aug 2017 18:57:07 +0000
Subject: Questions regarding charts
Message-ID: <53B99E92-23BD-4B72-BDB5-12135125FDF8@symantec.com>

Hi,

  We are running traffic against a web proxy, and the proxy is
configured to not *cache* anything. However, after the test run, when
we look at the chart attached to this email, we notice that there are
two lines in the chart: one for misses and the other for all replies.
If the proxy is totally non-caching, I would expect only one line, with
all *misses*, right?

Cheers,
Nagaraja

-------------- next part --------------
A non-text attachment was scrubbed...
Name: Screen Shot 2017-08-04 at 11.55.21 AM.png
Type: image/png
Size: 81801 bytes
Desc: Screen Shot 2017-08-04 at 11.55.21 AM.png
URL: 

From rousskov at measurement-factory.com  Fri Aug  4 21:32:41 2017
From: rousskov at measurement-factory.com (Alex Rousskov)
Date: Fri, 4 Aug 2017 15:32:41 -0600
Subject: Questions regarding charts
In-Reply-To: <53B99E92-23BD-4B72-BDB5-12135125FDF8@symantec.com>
References: <53B99E92-23BD-4B72-BDB5-12135125FDF8@symantec.com>
Message-ID: <3faaa793-94af-6b7e-2bf7-2969d7e836a3@measurement-factory.com>

On 08/04/2017 12:57 PM, Nagaraja Gundurao wrote:

> We are running traffic against a web proxy, and the proxy is
> configured to not *cache* anything. However, after the test run, when
> we look at the chart attached to this email, we notice that there are
> two lines in the chart: one for misses and the other for all replies.
> If the proxy is totally non-caching, I would expect only one line,
> with all *misses*, right?

The notions of "hit" and "miss" are only defined for "basic"
transactions. I suspect your test has a significant portion of
transactions that are not basic. For example, reloads, aborted
transactions, transactions with Range or If-Modified-Since headers,
various HTTP redirects, and CONNECT transactions are _not_ basic.

If you can post the entire generated report, or just the corresponding
.lx file (the output of polygraph-lx applied to the same set of binary
logs), then I may be able to explain what is going on in terms specific
to your test.

Thank you, Alex.
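P.S. In case it helps: the .lx file is just the plain-text output of
polygraph-lx run over the binary log(s). Assuming your client-side binary
log is named client.log (a hypothetical name; substitute whatever your run
produced), the invocation would look something like:

  polygraph-lx client.log > client.lx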
From Nagaraja_Gundurao at symantec.com  Mon Aug  7 23:57:53 2017
From: Nagaraja_Gundurao at symantec.com (Nagaraja Gundurao)
Date: Mon, 7 Aug 2017 23:57:53 +0000
Subject: Need help in setting up mixed traffic
Message-ID: <0461546F-8A2B-47EA-8E20-C36C37856EDA@symantec.com>

Hi,

  I am trying to achieve something like this:

             +----------const(5KB)------------\
Client(WPG) -|                                 |--------proxy--------Server(WPG)
             +----------CDB traffic-----------/

  client.pg                                                          server.pg

This is what I plan to achieve:

1. The client.pg file has two robots defined: R1 for const(5KB) and R2
   for CDB traffic (realistic content simulation).

2. On the server side, server.pg defines two servers, to serve the
   const(5KB) and cdb traffic respectively.

Problem: When I initiate the traffic, I see traffic for only one. E.g.,
if the use() entry in server.pg is use(S1,S2), where S1 is for
const(5KB) and S2 is for cdb, then I see only 5KB traffic. If I switch
the entries to use(S2,S1), I see only cdb traffic and no const(5KB). At
no time did I see both kinds of traffic coming through the proxy.

I am listing some of the errors here; please also send me an example
file, if this configuration is valid.

client.pg:

/*
 * A very simple "Hello, World!" workload
 */

// this is just one of the simplest workloads that can produce hits
// never use this workload for benchmarking

// SimpleContent defines properties of content that the server generates;
// if you get no hits, set SimpleContent.obj_life_cycle to cntStatic, which
// is defined in workloads/include/contents.pg
Content SimpleContent = {
    size = const(64KB);
    cachable = 80%; // 20% of content is uncachable
};

AddrMap M = {
    names = [ 'www.dropbox.com' ];
    addresses = [ '10.0.15.60:443' ];
    //addresses = S.addresses;
    //names = tracedHosts(R.foreign_trace);
};

DnsResolver dr = {
    servers = [ '10.0.15.60:53' ];
    timeout = 5sec;
};

SslWrap wrap = {
    protocols = [ "any" ];
    root_certificate = "/home/xxx/xx.pem";
    //ciphers = [ "ALL:HIGH:" : 100% ];
    ciphers = [ "ALL:!DES-CBC-SHA:!EXP-DES-CBC-SHA:!EXP-RC4-MD5:!EXP-RC2-CBC-MD5:" : 100% ];
    rsa_key_sizes = [ 512bit, 1024bit, 2048bit ];
    session_resumption = 40%;
    session_cache = 100;
    verify_peer_certificate = false;
};

use(M);

// a primitive server cleverly labeled "S101"
// normally, you would specify more properties,
// but we will mostly rely on defaults for now
Server S = {
    kind = "S101";
    contents = [ SimpleContent ];
    direct_access = contents;
    addresses = [ '10.0.15.60:443' ]; // where to create these server agents
    ssl_wraps = [ wrap ];
};

// a primitive robot
Robot R = {
    kind = "R101";
    interests = [ "foreign" ];
    foreign_trace = "/home/xx/xx.log";
    pop_model = { pop_distr = popUnif(); };
    recurrence = 55% / SimpleContent.cachable; // adjusted to get 55% DHR
    origins = S.addresses; // where the origin servers are
    dns_resolver = dr;
    ssl_wraps = [ wrap ];

    MimeHeader user1 = 'ELASTICA_MAGIC_COOKIE: 280509165510:xx.user1@xx';
    MimeHeader Host = 'Host: drive.google.com';
    MimeHeader User_Agent = 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:53.0) Gecko/20100101 Firefox/53.0';
    MimeHeader Accept = 'Accept: */*';
    MimeHeader Accept_Language = 'Accept-Language: en-US,en;q=0.5';
    MimeHeader Accept_Encoding = 'Accept-Encoding: gzip, deflate, br';
    // note: Referer, Cookie, x_elastica_gw, and el_auth_param are
    // referenced below but never defined in this file
    http_headers = [ [user1, Host, User_Agent, Accept, Accept_Language, Accept_Encoding, Referer, Cookie, x_elastica_gw, el_auth_param] : 100% ];

    addresses = [ '10.0.15.105' ** 1 ]; // where these robot agents will be created
    //req_rate = 0.1/sec;
};

Robot R1 = {
    pop_model = { pop_distr = popUnif(); };
    recurrence = 55% / SimpleContent.cachable; // adjusted to get 55% DHR
    origins = M.names; // where the origin servers are
    dns_resolver = dr;
    ssl_wraps = [ wrap ];
    //session.busy_period.duration = 1sec;
    //session.idle_period_duration = exp(11sec);

    MimeHeader user1 = 'MAGIC_COOKIE: 280509165510:xxuser1@xx.com';
    http_headers = [ user1 : 100% ];

    addresses = [ '10.0.15.105' ** 7 ]; // where these robot agents will be created
};

Phase phRampUp = {
    name = "rampup";
    goal.duration = 5min;
    populus_factor_beg = 0;
    populus_factor_end = 1;
};

Phase phRampDown = {
    name = "rampdown";
    goal.duration = 10sec;
    populus_factor_beg = 1;
    populus_factor_end = 0;
};

Phase phSustain = {
    name = "sustain";
    goal.duration = 60min;
    populus_factor_beg = 1;
    populus_factor_end = 1;
};

schedule(phRampUp, phSustain);

use(S, R1, R);
server.pg:

/*
 * A very simple "Hello, World!" workload
 */

// this is just one of the simplest workloads that can produce hits
// never use this workload for benchmarking

// SimpleContent defines properties of content that the server generates;
// if you get no hits, set SimpleContent.obj_life_cycle to cntStatic, which
// is defined in workloads/include/contents.pg
Content SimpleContent = {
    //size = const(64KB);
    content_db = "/home/yy/yy.cdb";
    cachable = 80%; // 20% of content is uncachable
};

Content SimpleContent1 = {
    size = const(5KB);
    cachable = 80%; // 20% of content is uncachable
};

DnsResolver dr = {
    servers = [ '10.0.15.60:53' ];
    timeout = 5sec;
};

SslWrap wrap = {
    protocols = [ "any" ];
    root_certificate = "/yy/yy.pem";
    //ciphers = [ "ALL:HIGH:" : 100% ];
    ciphers = [ "ALL:!DES-CBC-SHA:!EXP-DES-CBC-SHA:!EXP-RC4-MD5:!EXP-RC2-CBC-MD5:" : 100% ];
    rsa_key_sizes = [ 512bit, 1024bit, 2048bit ];
    session_resumption = 40%;
    session_cache = 100;
    verify_peer_certificate = false;
};

// a primitive server cleverly labeled "S101"
// normally, you would specify more properties,
// but we will mostly rely on defaults for now
Server S = {
    kind = "S101";
    contents = [ SimpleContent : 70%, SimpleContent1 : 30% ];
    direct_access = contents;
    addresses = [ '10.0.15.60:443' ]; // where to create these server agents
    ssl_wraps = [ wrap ];
};

// a primitive robot
Robot R = {
    kind = "R101";
    interests = [ "foreign" ];
    foreign_trace = "/home/yy/yy.log";
    pop_model = { pop_distr = popUnif(); };
    recurrence = 55% / SimpleContent.cachable; // adjusted to get 55% DHR
    origins = S.addresses; // where the origin servers are
    dns_resolver = dr;
    ssl_wraps = [ wrap ];

    MimeHeader user1 = 'MAGIC_COOKIE: 666923300190:yy.user1@yy';
    MimeHeader Host = 'Host: drive.google.com';
    //MimeHeader User-Agent = 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:40.0) Gecko/20100101 Firefox/40.0';
    //MimeHeader Accept = 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8';
    //MimeHeader Accept-Language = 'Accept-Language: en-US,en;q=0.5';
    http_headers = [ user1 : 100% ];

    addresses = [ '10.0.15.105' ** 1 ]; // where these robot agents will be created
    req_rate = 0.1/sec;
};

// a 1:1 map
AddrMap M = {
    names = [ 'www.drive.google.com', 'dropbox.com' ];
    addresses = [ '10.0.15.60:80', '10.0.15.60:443' ];
    addresses = S.addresses;
    names = tracedHosts(R.foreign_trace);
};

Phase phRampUp = {
    name = "rampup";
    goal.duration = 10sec;
    populus_factor_beg = 0;
    populus_factor_end = 1;
};

Phase phRampDown = {
    name = "rampdown";
    goal.duration = 10sec;
    populus_factor_beg = 1;
    populus_factor_end = 0;
};

Phase phSustain = {
    name = "sustain";
    goal.duration = 60min;
    populus_factor_beg = 1;
    populus_factor_end = 1;
};

// build schedule using some well-known phases and phases defined above
schedule(phRampUp, phSustain);

//use(M);
use(S);

Errors:

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rousskov at measurement-factory.com  Tue Aug  8 02:13:31 2017
From: rousskov at measurement-factory.com (Alex Rousskov)
Date: Mon, 7 Aug 2017 20:13:31 -0600
Subject: Need help in setting up mixed traffic
In-Reply-To: <0461546F-8A2B-47EA-8E20-C36C37856EDA@symantec.com>
References: <0461546F-8A2B-47EA-8E20-C36C37856EDA@symantec.com>
Message-ID: <539a1aa5-3111-a5eb-7795-6e2ab6710bd7@measurement-factory.com>

On 08/07/2017 05:57 PM, Nagaraja Gundurao wrote:

> 1. The client.pg file has two robots defined: R1 for const(5KB) and
>    R2 for CDB traffic (realistic content simulation).
>
> 2. On the server side, server.pg defines two servers, to serve the
>    const(5KB) and cdb traffic respectively.

If at all possible, please use a single file that describes all aspects
of the test. In other words, do _not_ use different workload files for
different "sides" of the test. Polygraph does not really care (yet) as
long as all the files are consistent, but it is very easy for humans to
make configuration mistakes (or misunderstand configurations) when
dealing with multiple configuration files that are all meant to
describe a single test.

For the record, there are very rare cases where side-specific workload
files are required. AFAICT, nothing in your email indicates that you
are dealing with one of those cases. However, even in those cases, the
side-specific workload files should be auto-generated from a single PGL
file that humans edit, or should differ only in some primitive #includes.
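For illustration only (the file name common.pg is hypothetical): the
human-edited definitions would live in one shared file, and each
side-specific file would shrink to little more than

  #include "common.pg"
  // ...plus whatever genuinely side-specific settings forced the split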
> Problem: When I initiate the traffic, I see traffic for only one.
> E.g., if the use() entry in server.pg is use(S1,S2), where S1 is for
> const(5KB) and S2 is for cdb, then I see only 5KB traffic. If I
> switch the entries to use(S2,S1), I see only cdb traffic and no
> const(5KB). At no time did I see both kinds of traffic coming
> through the proxy.

I think I understand the problem you are describing, but since neither
your server.pg file nor your client.pg file contains S1 and S2 servers,
it is very difficult for me to guess what exactly is going on. Besides,
reading two probably conflicting files confuses me a lot!

My recommendation is to merge the two files together, thinking about
the test as a whole. Chances are, once you polish your workload that
way, this particular problem will disappear. If it does not, please
repost the merged file showing both servers.

BTW, if you want to model two content types (cdb and basic) being
served by one server, then you do not need to define two PGL Servers.

> I am listing some of the errors here

The errors did not reach the mailing list. In the future, please attach
workload files and error logs rather than copy-pasting them.

Thank you, Alex.
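P.S. A minimal sketch of the "one server, two content types" idea,
reusing the definitions already present in your server.pg (the 70%/30%
split is your current value, not a recommendation):

Content CdbContent = {
    content_db = "/home/yy/yy.cdb"; // realistic content simulation
    cachable = 80%;
};

Content ConstContent = {
    size = const(5KB);
    cachable = 80%;
};

// a single server agent serves both content types
Server S = {
    kind = "S101";
    contents = [ CdbContent : 70%, ConstContent : 30% ];
    direct_access = contents;
    addresses = [ '10.0.15.60:443' ];
};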